In a few years, the world will be filled with billions of small, connected, intelligent devices. Many of these devices will be embedded in our homes, our cities, our vehicles, and our factories. Some of these devices will be carried in our pockets or worn on our bodies. The proliferation of small computing devices will disrupt every industrial sector and play a key role in the next evolution of personal computing.

Most of these devices will be small and mobile. Many of them will have limited memory (as small as 32 KB) and weak processors (as low as 20 MIPS). Almost all of them will use a variety of sensors to monitor their surroundings and interact with their users. Most importantly, many of these devices will rely on machine-learned models to interpret the signals from their sensors, to make accurate inferences and predictions about their environment, and, ultimately, to make intelligent decisions. Offloading this intelligence to the cloud is often impractical due to latency, bandwidth, privacy, reliability, and connectivity issues. Therefore, we need to execute a significant portion of the intelligence pipeline on the edge devices themselves.

State-of-the-art machine learning techniques are not a good fit for execution on small, resource-impoverished devices. Today’s machine learning algorithms are designed to run on powerful servers, which are often accelerated with specialized GPU and FPGA hardware. Therefore, our primary goal is to develop machine learning algorithms that are tailored for embedded platforms. Rather than just optimizing predictive accuracy, our techniques attempt to balance accuracy with runtime resource consumption.

In general, a machine learning problem considers a set of n samples of data and then tries to predict properties of unknown data. If each sample is more than a single number, for instance a multi-dimensional entry (i.e., multivariate data), it is said to have several attributes or features.
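
As a minimal sketch of this layout (assuming NumPy is available; the numbers are illustrative), the n samples can be arranged as a two-dimensional array with one row per sample and one column per feature:

```python
import numpy as np

# 150 samples, each described by 4 numeric features (multivariate data).
X = np.random.rand(150, 4)

n_samples, n_features = X.shape
print(n_samples, n_features)  # 150 4
```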

We can separate learning problems into a few broad categories:

  • Supervised learning, in which the data comes with additional attributes that we want to predict. This problem can be either of the following:

    • Classification: samples belong to two or more classes, and we want to learn from already labeled data how to predict the class of unlabeled data. An example of a classification problem is handwritten digit recognition, in which the aim is to assign each input vector to one of a finite number of discrete categories. Another way to think of classification is as a discrete (as opposed to continuous) form of supervised learning: there is a limited number of categories, and for each of the n samples provided, the task is to label it with the correct category or class (see the classification sketch after this list).
    • Regression: if the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem is predicting the length of a salmon as a function of its age and weight (a regression sketch also follows the list).
  • Unsupervised learning, in which the training data consists of a set of input vectors x without any corresponding target values. The goal in such problems may be to discover groups of similar examples within the data, a task called clustering; to determine the distribution of the data within the input space, known as density estimation; or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization (a clustering sketch appears below).
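
Here is a hedged sketch of the handwritten digit classification task mentioned above, assuming scikit-learn is installed; the choice of estimator (a linear support vector machine) is illustrative, not prescribed by the text:

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()  # 8x8 images flattened into 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = svm.SVC(kernel="linear")
clf.fit(X_train, y_train)         # learn from already labeled samples
print(clf.predict(X_test[:5]))    # predicted classes for unseen samples
print(clf.score(X_test, y_test))  # fraction of test samples classified correctly
```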
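
For regression, a small sketch of the salmon-length example follows. The dataset here is made up for illustration (the original text does not supply numbers), and the linear model is just one reasonable choice; it again assumes scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data. Each row: [age in years, weight in kg]; target: length in cm.
X = np.array([[1, 1.2], [2, 2.5], [3, 4.1], [4, 5.8], [5, 7.0]])
y = np.array([30.0, 45.0, 58.0, 70.0, 78.0])

reg = LinearRegression().fit(X, y)
print(reg.predict([[3, 3.9]]))  # predicted length for a new fish
```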
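
Finally, a sketch of clustering as unsupervised learning: the inputs carry no target values, and the algorithm discovers groups on its own. This assumes scikit-learn, and k = 3 is an arbitrary choice made because the synthetic data is generated with three groups:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three loose groups of points in 2-D, with no labels attached.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0, 3, 6)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])  # discovered group index for the first few samples
```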
