Apple standardises machine learning for Mac and iOS apps

Apple has released Core ML, a framework designed to help developers deploy machine learning models in Mac and iOS apps.

Core ML is designed so that developers need not build platform-level plumbing to put a machine learning model into an app. The framework serves predictions through three domain-specific frameworks layered on top of it.

The frameworks layered on Core ML establish a foundation for developers: Foundation provides common data types and natural language processing, Vision handles images, and GameplayKit handles gameplay logic and behavior. Each of these frameworks offers high-level objects that can be used as classes in Swift.
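
To make the Foundation side concrete, here is a minimal sketch that tags parts of speech with NSLinguisticTagger, the natural-language tagging API in Foundation; the sample sentence is purely illustrative.

    import Foundation

    // Tag each word in a sentence with its lexical class (noun, verb, ...).
    let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
    let text = "Apple released Core ML for developers."
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass,
                         options: [.omitWhitespace, .omitPunctuation]) { tag, tokenRange, _ in
        if let tag = tag {
            let word = (text as NSString).substring(with: tokenRange)
            print("\(word): \(tag.rawValue)")   // e.g. "Apple: Noun"
        }
    }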

These classes cover both specific use cases and open-ended prediction serving. The Vision framework, for instance, brings classes for barcode detection, face detection, text detection and horizon detection, alongside more general classes for broader use cases.
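
As an illustration, a face-detection request with Vision takes only a few lines. This is a minimal sketch, assuming a CGImage is already available from elsewhere in the app.

    import Vision
    import CoreGraphics

    // Run Vision's built-in face-detection request on a CGImage.
    func detectFaces(in image: CGImage) {
        let request = VNDetectFaceRectanglesRequest { request, _ in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            for face in faces {
                // boundingBox is given in normalised image coordinates (0...1).
                print("Face at \(face.boundingBox)")
            }
        }
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }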

Apple expects developers to use these high-level classes for most Core ML development. There is also a set of lower-level APIs for customised workflows and advanced use cases, allowing finer-grained manipulation of models and predictions.
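
A sketch of that lower-level path follows; "FlowerClassifier" is a hypothetical model name used only for illustration, standing in for any compiled .mlmodelc bundled with an app.

    import CoreML
    import Foundation

    // Load a compiled Core ML model and serve one prediction through the
    // generic, finer-grained MLModel API (the Xcode-generated wrapper
    // class offers a typed prediction(...) method instead).
    func classify(_ input: MLFeatureProvider) throws -> MLFeatureProvider {
        guard let url = Bundle.main.url(forResource: "FlowerClassifier",
                                        withExtension: "mlmodelc") else {
            fatalError("FlowerClassifier.mlmodelc not found in bundle")
        }
        let model = try MLModel(contentsOf: url)
        return try model.prediction(from: input)
    }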

Beta release with limitations

Core ML, being in beta, comes with a few limitations. There is no provision for model retraining or federated learning, though you can implement these on your own, and such features are expected to arrive in future versions of Core ML.
