When to Use Apple MLX vs Core ML

May 28, 2025 · #AI #ML #MLX #CoreML

Apple’s MLX and Core ML are two distinct machine learning frameworks, each designed for a different stage of the ML workflow on Apple devices.

Core ML was first released by Apple at WWDC 2017 for on-device machine learning on iOS, while MLX was introduced in late 2023 as a new machine learning framework optimized for Apple Silicon Macs.

MLX

MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple silicon, brought to you by Apple machine learning research. The Python API closely follows NumPy with a few exceptions. MLX also has a fully featured C++ API which closely follows the Python API.

The design of MLX is inspired by frameworks like PyTorch, Jax, and ArrayFire. A notable difference between MLX and these frameworks is the unified memory model. Arrays in MLX live in shared memory, so operations on MLX arrays can be performed on any of the supported device types without copying data. Currently supported device types are the CPU and GPU.
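
Because arrays aren’t tied to a device, you can direct individual operations to the CPU or GPU with a stream argument. A minimal sketch:

import mlx.core as mx

a = mx.random.normal((256, 256))
b = mx.random.normal((256, 256))

# The same arrays are usable on either device; no copies are made
c = mx.add(a, b, stream=mx.cpu)  # run on the CPU
d = mx.add(a, c, stream=mx.gpu)  # run on the GPU, reusing a and c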

When you perform operations in MLX, no computation actually happens right away. Instead, a compute graph is recorded, and the actual computation only happens when you call eval() (or when a result is otherwise needed, for example when printing an array).
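
Here’s a minimal sketch of that lazy evaluation:

import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = a * 2 + 1  # records the graph; nothing is computed yet
mx.eval(b)     # forces the computation
print(b)       # printing also triggers evaluation implicitly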

Make sure you’re on a Mac with Apple Silicon and have Python set up. Then install MLX:

pip install mlx

Here’s a short and simple MLX example of linear regression to fit y = 2x + 1.

import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim
import numpy as np

# Data: y = 2x + 1 + noise
x = mx.array(np.random.rand(100, 1).astype(np.float32))
y = 2 * x + 1 + 0.1 * mx.array(np.random.randn(100, 1).astype(np.float32))

# Model and optimizer
model = nn.Linear(1, 1)
optimizer = optim.SGD(learning_rate=1e-1)

# Loss as a function of the model, so nn.value_and_grad can
# differentiate with respect to the model's parameters
def loss_fn(model):
    return mx.mean((model(x) - y) ** 2)

loss_and_grad = nn.value_and_grad(model, loss_fn)

# Training loop
for _ in range(100):
    loss, grads = loss_and_grad(model)
    optimizer.update(model, grads)
    mx.eval(model.parameters(), optimizer.state)  # force the lazy updates

print("Trained weight:", model.weight.item(), "bias:", model.bias.item())

There’s also MLX Swift, which expands MLX to the Swift language, making research and experimentation easier on Apple silicon. Here’s the same linear regression example in Swift:

import MLX
import MLXNN
import MLXOptimizers
import MLXRandom

// Generate synthetic data: y = 2x + 1 + noise
let x = MLXRandom.uniform(0.0 ..< 1.0, [100, 1])
let y = 2 * x + 1 + MLXRandom.normal([100, 1]) * 0.1

// Define a simple linear model and optimizer
let model = Linear(1, 1)
let optimizer = SGD(learningRate: 0.1)

// Loss as a function of the model, so valueAndGrad can
// differentiate with respect to its parameters
func loss(model: Linear, x: MLXArray, y: MLXArray) -> MLXArray {
    mseLoss(predictions: model(x), targets: y, reduction: .mean)
}

let lossAndGrad = valueAndGrad(model: model, loss)

// Training loop
for _ in 0..<100 {
    let (_, grads) = lossAndGrad(model, x, y)
    optimizer.update(model: model, gradients: grads)
    eval(model, optimizer)  // force the lazy updates
}

print("Trained parameters: \(model.parameters())")

More info: Awesome MLX, On-device ML research with MLX and Swift, Deploying LLMs locally with Apple’s MLX framework.

Core ML

Core ML is Apple’s machine learning framework designed for deploying and running trained models efficiently on Apple devices, including iOS, macOS, watchOS, and tvOS.

  • Inference Engine: Optimized for fast and efficient on-device inference, utilizing hardware accelerators like the GPU and Apple Neural Engine (ANE).
  • Model Conversion: Supports conversion from various model formats (e.g., TensorFlow, PyTorch) to Core ML format using tools like coremltools (see the sketch after this list).
  • Swift Integration: Seamlessly integrates with Swift and Objective-C, allowing you to incorporate ML models into your apps with minimal effort.
  • Privacy and Performance: Ensures user data privacy by performing all computations on-device, eliminating the need for server-side processing.
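
To make the conversion path concrete, here’s a minimal coremltools sketch that converts a traced PyTorch model into an .mlpackage; the tiny linear model and the file name are just placeholders:

import coremltools as ct
import torch

# Any PyTorch model works; this linear model is a placeholder
model = torch.nn.Linear(1, 1).eval()
example_input = torch.rand(1, 1)
traced = torch.jit.trace(model, example_input)

# Convert to Core ML; compute_units controls which hardware
# (CPU, GPU, ANE) the model may use at inference time
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="x", shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("LinearModel.mlpackage")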

Here’s an example that classifies an image. You can use any of Apple’s pre-trained image classification models; MobileNetV2 is lightweight and fast, with good accuracy for general use. The code assumes you’ve added MobileNetV2.mlmodel to your Xcode project, which makes Xcode generate the MobileNetV2 class used below.

import UIKit
import CoreML
import Vision

// Load the Core ML model (Xcode generates the MobileNetV2 class)
guard let mobileNet = try? MobileNetV2(configuration: MLModelConfiguration()),
      let model = try? VNCoreMLModel(for: mobileNet.model) else {
    fatalError("Failed to load model")
}

// Prepare input image (UIImage)
let inputImage = UIImage(named: "dog.jpg")!
let ciImage = CIImage(image: inputImage)!

// Create a Vision request
let request = VNCoreMLRequest(model: model) { request, error in
    if let results = request.results as? [VNClassificationObservation],
       let top = results.first {
        print("Prediction: \(top.identifier) - \(Int(top.confidence * 100))%")
    }
}

// Run the model
let handler = VNImageRequestHandler(ciImage: ciImage)
try? handler.perform([request])

Embedding into apps

Embedding MLX models directly into macOS apps is currently quite challenging.

MLX is mainly designed as a development and experimentation framework for training and prototyping models on Apple Silicon Macs. It doesn’t yet have a streamlined or official way to export models into a deployable format that can be easily embedded and run inside macOS or iOS apps.

By contrast, Core ML is built specifically for deployment — with tools and APIs that make it straightforward to integrate models into apps for inference.