Comparing Machine Learning Models

  • Posted on: 4 October 2018
  • By: C.J.
[Image: flowers that the classifier labeled as mashed potatoes]

It's not yet Core ML 2, but I've been testing how Apple has integrated machine learning into its toolbox.

Using the same Vision framework that ships as part of iOS, I was able to test several different models.

In Xcode, I can import an .mlmodel file downloaded from Apple, or one created with Turi Create, and compare results just by dragging and dropping it into my project.
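
Swapping between models is mostly a one-line change, since Xcode generates a Swift class named after each .mlmodel file you drop in. Here's a minimal sketch of switching between two of Apple's sample models; the MobileNet and Inceptionv3 class names assume you've added those two files to the project:

import CoreML
import Vision

// Sketch only: MobileNet and Inceptionv3 are the classes Xcode
// generates once the corresponding .mlmodel files are in the project.
func visionModel(named name: String) -> VNCoreMLModel? {
    switch name {
    case "MobileNet":
        return try? VNCoreMLModel(for: MobileNet().model)
    case "Inceptionv3":
        return try? VNCoreMLModel(for: Inceptionv3().model)
    default:
        return nil
    }
}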

The short answer was immediately apparent: size matters. If you want depth in the answers, the model needs enough in it to work with.

The largest model I tested was over 550MB and slowed everything down. It classified well, but its advantage over the smaller models was only in nuance.

The smallest model did the worst, and there were other options almost as small. Across the range, from 5MB to over 100MB, the sweet spot seemed to be MobileNet at 17MB. Nothing sophisticated, just the basics.
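
If you want numbers rather than a feeling, wall-clocking a single request is enough for a rough comparison. A sketch, assuming you already have a VNCoreMLModel in hand (perform(_:) runs synchronously, so timing around it is fair):

import CoreML
import UIKit
import Vision

// Rough-and-ready latency check: time one synchronous classification.
func timeClassification(of image: UIImage, with model: VNCoreMLModel) {
    guard let data = image.jpegData(compressionQuality: 1.0) else { return }
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(data: data, options: [:])
    let start = CFAbsoluteTimeGetCurrent()
    try? handler.perform([request])   // blocks until the request finishes
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    print(String(format: "Inference took %.0f ms", elapsed * 1000))
}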

The Vision side of the test is only a few lines:

import CoreML
import UIKit
import Vision

// Wrap the Core ML model (here, a stored instance of the Xcode-generated
// Inceptionv3 class) so Vision can drive it.
if let model = try? VNCoreMLModel(for: inceptionv3Model.model) {
    let process = VNCoreMLRequest(model: model) { (request, error) in
        if let results = request.results as? [VNClassificationObservation] {
            // Keep the top twelve classifications; the handler can fire
            // on a background queue, so hop back to the main thread
            // before touching the UI.
            DispatchQueue.main.async {
                self.resultsDict = Array(results.prefix(12))
                self.tableView.reloadData()
            }
        }
    }
    if let imageData = inputImageView.image?.jpegData(compressionQuality: 1.0) {
        let handler = VNImageRequestHandler(data: imageData, options: [:])
        try? handler.perform([process])
    }
}
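
Each VNClassificationObservation carries an identifier and a confidence between 0 and 1, which is all the table needs. A sketch of the matching cell; the "Cell" reuse identifier is my placeholder, and resultsDict is the array stored in the handler above:

// Display the label and confidence for each stored observation.
override func tableView(_ tableView: UITableView,
                        cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
    let observation = resultsDict[indexPath.row]
    cell.textLabel?.text = observation.identifier
    cell.detailTextLabel?.text = String(format: "%.1f%%", observation.confidence * 100)
    return cell
}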

Don't mistake your flowers for mashed potatoes, and you'll be fine. Here's a good place to start: https://developer.apple.com/documentation/coreml/getting_a_core_ml_model

Sample models are here: https://developer.apple.com/machine-learning/build-run-models/