
Ten Popular Tools and Frameworks for Artificial Intelligence

This article highlights ten tools and frameworks that feature on the ‘hot list’ for artificial intelligence. A short description along with features and links is given for each tool or framework.

Let’s go on an exciting journey, discovering exactly why the following tools and frameworks are ranked so high.

1) TensorFlow: An open source software library for machine intelligence

TensorFlow is an open source software library that was originally developed by researchers and engineers working on the Google Brain Team. TensorFlow is used for numerical computation with data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multi-dimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server or mobile device, with a single API.

TensorFlow provides multiple APIs. The lowest level API—TensorFlow Core—provides you with complete programming control. The higher-level APIs are built on top of TensorFlow Core and are typically easier to learn and use than TensorFlow Core. In addition, the higher-level APIs make repetitive tasks easier and more consistent between different users. A high-level API like tf.estimator helps you manage data sets, estimators, training and inference.

The central unit of data in TensorFlow is the tensor, which consists of a set of primitive values shaped into an array of any number of dimensions. A tensor’s rank is its number of dimensions.
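
To make tensors and rank concrete, here is a minimal sketch written against the TensorFlow 1.x-style API that was current when this article was written (the variable names are purely illustrative):

```python
import tensorflow as tf

# Tensors of increasing rank (rank = number of dimensions)
scalar = tf.constant(3.0)                       # rank 0
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2, shape (2, 2)

# Operations are nodes in the data flow graph; tensors flow along its edges.
total = tf.add(vector, vector)

# A session executes the graph, on CPU or GPU.
with tf.Session() as sess:
    print(sess.run(total))  # [2. 4. 6.]
```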

A few Google applications using TensorFlow are listed below.

RankBrain: A large-scale deployment of deep neural nets for search ranking on www.google.com.

Inception image classification model: This is a baseline model, the result of ongoing research into highly accurate computer vision models, starting with the model that won the 2014 ImageNet image classification challenge.

SmartReply: A deep LSTM model to automatically generate email responses.

Massive multi-task networks for drug discovery: A deep neural network model for identifying promising drug candidates – built by Google in association with Stanford University.

On-device computer vision for OCR: An on-device computer vision model for optical character recognition to enable real-time translation.

Useful links

TensorFlow home: https://www.tensorflow.org

GitHub: https://github.com/tensorflow

Getting started: https://www.tensorflow.org/get_started/get_started

2) Apache SystemML: An optimal workplace for machine learning using Big Data

SystemML is a machine learning technology created at IBM, and ranks among the top-level projects at the Apache Software Foundation. It is a flexible, scalable machine learning system.

Important characteristics

1. Algorithm customisability via R-like and Python-like languages

2. Multiple execution modes, including Spark MLContext, Spark Batch, Hadoop Batch, Standalone and JMLC (Java Machine Learning Connector)

3. Automatic optimisation based on data and cluster characteristics to ensure both efficiency and scalability

SystemML is often described as the SQL of machine learning. The latest version (1.0.0) of SystemML supports Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+ and Spark 2.1+.

It can be run on top of Apache Spark, where it automatically scales your data, line by line, determining whether your code should run on the driver or on the Apache Spark cluster. Planned SystemML developments include additional deep learning and GPU capabilities, such as importing and running neural network architectures and pre-trained models for training.
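
As a sketch of how a DML script is driven from Python on Spark, the snippet below uses SystemML's Python MLContext API (it assumes the systemml pip package and a running Spark installation; the DML script itself is illustrative):

```python
from pyspark import SparkContext
from systemml import MLContext, dml

sc = SparkContext.getOrCreate()
ml = MLContext(sc)

# A DML (Declarative Machine Learning) script: generate a random
# 1000 x 10 matrix and compute the mean of its entries.
script = dml("""
    X = rand(rows=1000, cols=10)
    m = mean(X)
""").output("m")

m = ml.execute(script).get("m")
print(m)
```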

Java Machine Learning Connector (JMLC) for SystemML

The Java Machine Learning Connector (JMLC) API is a programmatic interface for interacting with SystemML in an embedded fashion. The primary purpose of JMLC is that of a scoring API, whereby your scoring function is expressed using SystemML’s DML (Declarative Machine Learning) language. In addition to scoring, embedded SystemML can be used for tasks such as unsupervised learning (like clustering) in the context of a larger application running on a single machine.

Useful links

SystemML home: https://systemml.apache.org/

GitHub: https://github.com/apache/systemml

3) Caffe: A deep learning framework made with expression, speed and modularity in mind

The Caffe project was initiated by Yangqing Jia during his Ph.D. at UC Berkeley, and later developed further by Berkeley AI Research (BAIR) and community contributors. It mostly focuses on convolutional networks for computer vision applications. Caffe is a solid, popular choice for computer vision-related tasks, and you can download many successful models made by Caffe users from the Caffe Model Zoo (link below) for out-of-the-box use.

Caffe’s advantages

1) Expressive architecture encourages application and innovation. Models and optimisation are defined by configuration without hard coding. Users can switch between CPU and GPU by setting a single flag to train on a GPU machine, and then deploy to commodity clusters or mobile devices (see the sketch after this list).

2) Extensible code fosters active development. In Caffe’s first year, it was forked by over 1,000 developers and had many significant changes contributed back.

3) Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60 million images per day with a single NVIDIA K40 GPU.

4) Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech and multimedia.
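
Here is a minimal pycaffe sketch of the single-flag device switch and of models defined purely by configuration (the .prototxt and .caffemodel file names are placeholders, not files shipped with Caffe):

```python
import caffe

# One call switches the whole framework between CPU and GPU.
caffe.set_mode_gpu()   # or caffe.set_mode_cpu() on machines without a GPU

# The network architecture lives in a .prototxt configuration file and
# the trained weights in a .caffemodel file; no model code is written.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
print(net.blobs['data'].data.shape)  # shape of the (conventional) input blob
```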

Useful links

Caffe home: http://caffe.berkeleyvision.org/

GitHub: https://github.com/BVLC/caffe

Caffe user group: https://groups.google.com/forum/#!forum/caffe-users

Tutorial presentation of the framework and a full-day crash course: https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p

Caffe Model Zoo: https://github.com/BVLC/caffe/wiki/Model-Zoo

4) Apache Mahout: A distributed linear algebra framework and mathematically expressive Scala DSL

Mahout is designed to let mathematicians, statisticians and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end, and the framework can be extended to other distributed back-ends. Its features include the following:

  • A mathematically expressive Scala DSL
  • Support for multiple distributed back-ends (including Apache Spark)
  • Modular native solvers for CPU, GPU and CUDA acceleration

Apache Mahout currently implements collaborative filtering (CF), clustering and categorisation.

Features and applications

  • Taste CF: Taste is an open source project for CF (collaborative filtering) started by Sean Owen on SourceForge and donated to Mahout in 2008
  • Several MapReduce-enabled clustering implementations, including k-means, fuzzy k-means, Canopy, Dirichlet and Mean-Shift
  • Distributed Naive Bayes and Complementary Naive Bayes classification implementations
  • Distributed fitness function capabilities for evolutionary programming
  • Matrix and vector libraries
  • Examples of all the above algorithms

Useful links

Mahout home: http://mahout.apache.org/

GitHub: https://github.com/apache/mahout

An introduction to Mahout by Grant Ingersoll: https://www.ibm.com/developerworks/library/j-mahout/

5) OpenNN: An open source class library written in C++ to implement neural networks

OpenNN (Open Neural Networks Library), formerly known as Flood, is based on R. Lopez's 2008 Ph.D. thesis at the Technical University of Catalonia, 'Neural Networks for Variational Problems in Engineering'.

OpenNN implements data mining methods as a bundle of functions. These can be embedded in other software tools using an application programming interface (API) for the interaction between the software tool and the predictive analytics tasks. The main advantage of OpenNN is its high performance. It is developed in C++ for better memory management and higher processing speed. It implements CPU parallelisation by means of OpenMP and GPU acceleration with CUDA.

The package comes with unit testing, many examples and extensive documentation, and provides an effective framework for the research and development of neural network algorithms and applications. Neural Designer, a professional predictive analytics tool, is built on OpenNN: its neural engine uses the library.

OpenNN has been designed to learn from both data sets and mathematical models.

Data sets

  • Function regression
  • Pattern recognition
  • Time series prediction

Mathematical models

  • Optimal control
  • Optimal shape design

Data sets and mathematical models

  • Inverse problems

Useful links

OpenNN home: http://www.opennn.net/

OpenNN Artelnics GitHub: https://github.com/Artelnics/OpenNN

Neural Designer: https://neuraldesigner.com/

6) Torch: An open source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language

Torch provides a wide range of algorithms for deep machine learning. It uses the scripting language LuaJIT, and an underlying C/CUDA implementation. The core package of Torch is torch. It provides a flexible N-dimensional array or tensor, which supports basic routines for indexing, slicing, transposing, type-casting, resizing, sharing storage and cloning. The nn package is used for building neural networks.

Features

  • A powerful N-dimensional array
  • Lots of routines for indexing, slicing and transposing
  • An amazing interface to C, via LuaJIT
  • Linear algebra routines
  • Neural network and energy-based models
  • Numeric optimisation routines
  • Fast and efficient GPU support
  • Embeddable, with ports to iOS and Android back-ends

Torch is used by the Facebook AI Research Group, IBM, Yandex and the Idiap Research Institute. It has been extended for use on Android and iOS. It has been used to build hardware implementations for data flows like those found in neural networks. Facebook has released a set of extension modules as open source software.

PyTorch is an open source machine learning library for Python, used for applications such as natural language processing. It is primarily developed by Facebook’s artificial intelligence research group, and Uber’s Pyro software for probabilistic programming has been built upon it.
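
To give a feel for what PyTorch inherits from Torch, here is a minimal sketch of its tensor type and automatic differentiation:

```python
import torch

# A 2x2 tensor that tracks gradients through operations on it
x = torch.ones(2, 2, requires_grad=True)
y = (x * x + 3).sum()

y.backward()      # backpropagate: compute dy/dx
print(x.grad)     # each entry is 2, since d(x^2 + 3)/dx = 2x = 2 at x = 1
```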

Useful links

Torch Home: http://torch.ch/

GitHub: https://github.com/torch

7) Neuroph: An object-oriented neural network framework written in Java

Neuroph can be used to create and train neural networks in Java programs. It provides a Java class library as well as a GUI tool called easyNeurons for creating and training neural networks. Neuroph is a lightweight framework for developing common neural network architectures: it contains a well-designed, open source Java library with a small number of basic classes that correspond to basic NN concepts, plus a nice GUI neural network editor for quickly creating Java neural network components. It has been released as open source under the Apache 2.0 licence.

Neuroph’s core classes correspond to basic neural network concepts like the artificial neuron, neuron layer, neuron connections, weight, transfer function, input function and learning rule. Neuroph supports common neural network architectures such as multi-layer perceptrons with backpropagation, and Kohonen and Hopfield networks. All these classes can be extended and customised to create custom neural networks and learning rules. Neuroph has built-in support for image recognition.

Useful links

Neuroph home: http://neuroph.sourceforge.net/

GitHub: https://github.com/neuroph/neuroph

8) Deeplearning4j: The first commercial-grade, open source, distributed deep learning library written for Java and Scala

Deeplearning4j (DL4J) is integrated with Hadoop and Spark. DL4J is designed to be used in business environments on distributed GPUs and CPUs. Skymind is its commercial support arm, bundling Deeplearning4j and other libraries such as TensorFlow and Keras in the Skymind Intelligence Layer (SKIL, Community Edition), which is a deep learning environment that gives developers an easy, fast way to train and deploy AI models. SKIL acts as a bridge between Python data science environments and the JVM.

Advantages

  • Deeplearning4j aims to be cutting-edge plug-and-play, with more convention than configuration, which allows for fast prototyping for non-researchers.
  • It is customisable at scale.
  • DL4J can import neural net models from most major frameworks via Keras, including TensorFlow, Caffe and Theano, bridging the gap between the Python ecosystem and the JVM with a cross-team toolkit for data scientists, data engineers and DevOps. Keras is employed as Deeplearning4j’s Python API (see the sketch after this list).
  • Machine learning models are served in production with Skymind’s model server.
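
Since the Keras bridge is central to DL4J’s Python story, here is a minimal sketch of the Python side: a Keras model is saved to HDF5, which DL4J can then import on the JVM through its Keras model-import facility. The model and file name are purely illustrative.

```python
from keras.models import Sequential
from keras.layers import Dense

# A small classifier: 4 inputs, 3 output classes
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Save to HDF5; this is the file a JVM application would import via DL4J.
model.save('my_model.h5')
```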

Features

  • Distributed CPUs and GPUs
  • Java, Scala and Python APIs
  • Adapted for micro-service architecture
  • Parallel training via iterative reduce
  • Scalable on Hadoop
  • GPU support for scaling on AWS

Libraries

  • Deeplearning4j: A neural net platform
  • ND4J: NumPy for the JVM
  • DataVec: A tool for machine learning ETL operations
  • JavaCPP: The bridge between Java and native C++
  • Arbiter: An evaluation tool for machine learning algorithms
  • RL4J: Deep reinforcement learning for the JVM

Useful links

DL4J home: https://deeplearning4j.org/

GitHub: https://github.com/deeplearning4j/deeplearning4j

SKIL: https://skymind.ai/quickstart

9) Mycroft: One of the world’s first open source assistants, ideal for anything from a science project to an enterprise software application

Mycroft runs anywhere – on a desktop computer, inside an automobile, or on a Raspberry Pi. This is open source software which can be freely remixed, extended and improved. Mycroft may be used in anything from a science project to an enterprise software application.
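
Mycroft is extended through skills written in Python. The sketch below assumes the standard mycroft-core skill layout; the skill, intent and dialog names are illustrative:

```python
from mycroft import MycroftSkill, intent_file_handler

class HelloWorldSkill(MycroftSkill):
    """A trivial skill that answers a 'hello world' utterance."""

    @intent_file_handler('hello.world.intent')
    def handle_hello_world(self, message):
        # Speak the response defined in dialog/en-us/hello.world.dialog
        self.speak_dialog('hello.world')

def create_skill():
    # mycroft-core calls this factory when loading the skill
    return HelloWorldSkill()
```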

Useful links

Mycroft home: https://mycroft.ai

GitHub: https://github.com/MycroftAI/mycroft-core

10) OpenCog: A project that aims to build an open source artificial intelligence framework

OpenCog is a diverse assemblage of cognitive algorithms, each embodying its own innovations. What makes the overall architecture powerful is its careful adherence to the principle of cognitive synergy. OpenCog was originally based on the 2008 release of the source code of the proprietary Novamente Cognition Engine (NCE) of Novamente LLC. The original NCE code is discussed in the PLN book (reference given below). Ongoing development of OpenCog is supported by the Artificial General Intelligence Research Institute (AGIRI), the Google Summer of Code project, and others.

OpenCog Prime is the architecture for robot and virtual embodied cognition, which defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. OpenCog Prime’s design is primarily the work of Ben Goertzel, while the OpenCog framework is intended as a generic framework for broad-based AGI research.

OpenCog consists of the following:

  • A graph database that holds terms, atomic formulae, sentences and relationships as hypergraphs, giving them a probabilistic truth-value interpretation, dubbed the AtomSpace (see the sketch after this list).
  • A satisfiability modulo theories solver, built in as a part of a generic graph query engine, for performing graph and hypergraph pattern matching (isomorphic subgraph discovery).
  • An implementation of a probabilistic reasoning engine based on probabilistic logic networks (PLN).
  • A probabilistic genetic program evolver called Meta-Optimizing Semantic Evolutionary Search, or MOSES, originally developed by Moshe Looks, who is now at Google.
  • An attention allocation system based on economic theory, called ECAN.
  • An embodiment system for interaction and learning within virtual worlds based in part on OpenPsi and Unity.
  • A natural language input system consisting of Link Grammar and RelEx, both of which employ AtomSpace-like representations for semantic and syntactic relations.
  • A natural language generation system called SegSim, with implementations NLGen and NLGen2.
  • An implementation of Psi-Theory for handling emotional states, drives and urges, dubbed OpenPsi.
  • Interfaces to Hanson Robotics robots, including emotion modelling via OpenPsi.
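
As an illustration of the AtomSpace, here is a minimal sketch using OpenCog’s Python bindings (it assumes the opencog Python package; the exact API has varied across versions, so treat this as indicative rather than definitive):

```python
from opencog.atomspace import AtomSpace, TruthValue, types

atomspace = AtomSpace()

# Concepts are nodes in the hypergraph
cat = atomspace.add_node(types.ConceptNode, 'cat')
animal = atomspace.add_node(types.ConceptNode, 'animal')

# Assert "cat inherits from animal" with a probabilistic truth value
# (strength 0.9, confidence 0.8)
inherits = atomspace.add_link(types.InheritanceLink, [cat, animal],
                              TruthValue(0.9, 0.8))
print(inherits)
```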

Useful links

OpenCog home: https://opencog.org/

GitHub: https://github.com/opencog

OpenCog Wiki: https://wiki.opencog.org/w/The_Open_Cognition_Project
