Canadian philosopher Marshall McLuhan once said, “We become what we behold. We shape our tools, and thereafter our tools shape us.” If so, then artificial intelligence (AI) is unique in that the object we behold is ourselves, specifically our brains. And if that is true, it will be interesting to see how the tool that is artificial intelligence recursively shapes us and our future. Democratizing the development tools that allow us to create objects imbued with AI-based capabilities will be crucial to building a bright, positive future for humanity. A company known as Edge Impulse is doing its part to ensure just that.
The paradigm shift to neural networks can be daunting for embedded system developers who cut their teeth in the heyday of procedural or even object-oriented programming. For some, it feels like giving up a bit of absolute control of the design to what on the surface seems unproven, if not downright magic. Still, the promises of machine learning on the edge (that is, moving AI algorithms out of the cloud and down to the microcontrollers found in billions of IoT devices) are too tempting to ignore. Fortunately, Edge Impulse provides an incredibly straightforward and, more importantly, well-documented path for embedded systems engineers to successfully navigate the relatively new waters of AI, machine learning, and neural networks (NN).
The chances are that at some point in embedded design, an engineer will sketch out a flowchart to understand the various states a machine will be in during its operational lifecycle. To that end, it is beneficial to understand the steps that one will encounter while using Edge Impulse to develop a customized neural network for a unique embedded application. The following summarizes those steps from the perspective of an embedded electronics engineer versus a computer scientist with a specialty in artificial intelligence.
Step 1: Acquiring Training Data
The development of a neural network requires access to data. Lots and lots of data. In short, the more data, the more accurate the future NN model will be when predicting an output based on real-world operations. Edge Impulse provides a variety of easy-to-use tools to get data from the real world to their servers to develop a custom neural network. First and foremost, they provide pre-built firmware for many popular development boards (such as the TI CC1352P LaunchPad, SiLabs Thunderboard Sense 2, and Arduino Portenta) that can access the various onboard sensors and stream the data back to Edge Impulse. For other boards, Edge Impulse provides a suite of tools under the umbrella of their Command Line Interface (CLI) toolset, available for macOS, Windows, and the Linux distros Ubuntu and Raspbian. The CLI requires Python 3 and Node.js to be installed on your desktop. Three key tools of the CLI are:

- edge-impulse-daemon, which connects an officially supported development board to the Edge Impulse ingestion service
- edge-impulse-uploader, which uploads previously collected sample files
- edge-impulse-data-forwarder, which forwards sensor readings streamed over a serial connection from otherwise unsupported boards
These tools are especially useful to get sensor data from a development board that lacks direct Internet connectivity. They act as a proxy to take in data via a serial port and forward it onto the Edge Impulse servers via the host internet connection. Edge Impulse also provides a browser-based mechanism to collect data (such as voice samples or accelerometer data) from a smartphone.
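To make the forwarding idea concrete, here is a minimal Python sketch of the line-oriented format a serial data forwarder of this kind consumes: one newline-terminated line per sample, with sensor axes separated by commas. The `read_accelerometer()` stub is a hypothetical stand-in for real sensor access, not part of any Edge Impulse API.

```python
# Device-side sketch: emit one comma-separated, newline-terminated line
# per accelerometer sample over the serial port. The forwarder on the
# host reads these lines and relays them to the ingestion service.

def read_accelerometer():
    # Hypothetical stub; a real device would read its IMU registers here.
    return (0.02, -0.98, 9.81)

def format_sample(axes):
    """Format one sample as the comma-separated line a forwarder parses."""
    return ",".join(f"{v:.2f}" for v in axes) + "\n"

if __name__ == "__main__":
    print(format_sample(read_accelerometer()), end="")
```

On real hardware this loop would run at a fixed rate, since the forwarder infers the sampling frequency from how quickly lines arrive.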
From a practical perspective, carefully think through all possible states your embedded device will encounter during its operation. For example, in a recent project involving industrial machinery and identifying machine failure from accelerometer data, the development team collected plenty of data while the machine was operating under load as intended and while it was in a failure mode. But initially, the team failed to collect data while the machine was idling. As a result, the first NN model had difficulty distinguishing between failure and idling. The NN was then retrained with data collected while the machine was idling, and the accuracy of the predictions (i.e., the NN performance) improved dramatically. Bottom line: if real estate is all about location, location, location, then machine learning is all about data, data, data.
Step 2: Labeling and Chunking Up the Raw Data
Once the training data is on the Edge Impulse servers, the remainder of the work to train a NN model (aka an “Impulse”) occurs on the Edge Impulse website via a web browser. First, the datasets we collected must be labeled with the output state that each particular dataset represents. This is accomplished by simply editing the “Label” tag for each individual dataset that was collected. Using the aforementioned industrial machinery example, a third of the data was labeled “Failure,” another third “Normal,” and the last third was labeled “Idle.” Recall that the output of a neural network is not absolute; instead, it is a percentage of certainty ascribed to each possible outcome.
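That “percentage of certainty” output is typically produced by a softmax layer, which squashes the network’s raw scores into values that sum to one. A minimal sketch, using hypothetical raw scores for the three labels above:

```python
import math

def softmax(scores):
    """Convert raw network outputs into certainties that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

labels = ["Failure", "Normal", "Idle"]
certainties = softmax([0.4, 2.1, 1.0])  # illustrative raw scores
best = labels[max(range(len(labels)), key=lambda i: certainties[i])]
for label, p in zip(labels, certainties):
    print(f"{label}: {p:.1%}")
```

The predicted state is simply the label with the highest certainty, which is why no single outcome is ever reported as absolute.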
With time-series data (such as accelerometer readings collected over time), it is necessary to “chunk” up the data within each dataset. Like all good problem-solving techniques, breaking up a problem into smaller, more manageable chunks allows one to tackle seemingly insurmountable problems. During this initial phase of NN training, you can adjust some attributes of how the data will be analyzed, including window size, window increase, sampling frequency, and whether or not the data should be zero-padded. In addition, these various attributes can be adjusted to balance the tradeoff between the resolution of the analysis versus the time to complete the analysis.
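The windowing attributes above can be illustrated with a short, self-contained Python sketch. This is a simplified model of the process, not Edge Impulse’s actual implementation:

```python
def window(samples, freq_hz, window_ms, increase_ms, zero_pad=True):
    """Chunk a time series into (possibly overlapping) fixed-size windows."""
    size = int(freq_hz * window_ms / 1000)    # samples per window
    step = int(freq_hz * increase_ms / 1000)  # samples between window starts
    out = []
    for start in range(0, len(samples), step):
        chunk = samples[start:start + size]
        if len(chunk) < size:
            if zero_pad and chunk:
                chunk = chunk + [0] * (size - len(chunk))  # pad short tail
            else:
                break
        out.append(chunk)
    return out

# 300 ms of 100 Hz data, 100 ms windows advancing 50 ms at a time.
chunks = window(list(range(30)), freq_hz=100, window_ms=100, increase_ms=50)
```

Each resulting window becomes one training example, so a smaller window increase yields more (overlapping) examples from the same recording, at the cost of a longer analysis.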
Step 3: Analyze and Convert the Raw Data Chunks
After the data has been appropriately chunked up, it is time to analyze it by applying an appropriate analysis technique in the form of a “Processing Block.” This takes the raw data and converts it into a format that can be used by the NN classifiers downstream in the training process. Edge Impulse offers several different analysis techniques depending on the type of data to be analyzed.
Step 4: Classify the Data Chunks, Run the NN Classifier
Once we have the raw data converted into a usable format and understand how to extract characteristics from our datasets, it's necessary to train the NN to learn from those characteristics so it can classify test and operational datasets appropriately. In other words, all datasets that represent a system failure should be classified as such. Likewise, all datasets representing normal operations should be classified similarly. This is accomplished with the application of a so-called learning block. Like with processing blocks, there are various learning blocks that can be applied depending on the type of data. For example, for rapidly fluctuating time-varying data, such as the datasets from our example, the following learning blocks are available:

- Classification (Keras), which learns to assign each window of data to one of the defined output classes
- Anomaly Detection (K-means), which flags data that does not resemble anything seen during training
Various parameters of the signal processing algorithms can be tweaked to fine-tune the performance of the learning block. By adjusting the parameters such as cutoff frequency and Fast Fourier Transform (FFT) length, a balance can be achieved between processing time and peak Random Access Memory (RAM) usage. Edge Impulse even provides a performance estimate of processing time and RAM usage when running on the target embedded platform.
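The RAM-versus-resolution tradeoff of the FFT length can be seen in a small NumPy sketch: a longer FFT buffer resolves frequencies more finely, but that buffer must be held in memory. The 50Hz test tone and 1kHz sampling rate are arbitrary illustrative values:

```python
import numpy as np

# One window of synthetic "vibration" data: a 50 Hz sine sampled at 1 kHz.
fs = 1000            # sampling frequency in Hz
n = 256              # FFT length; frequency resolution is fs/n (~3.9 Hz here)
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(signal))    # magnitude spectrum of the window
freqs = np.fft.rfftfreq(n, d=1 / fs)      # bin center frequencies
peak_hz = freqs[np.argmax(spectrum)]      # dominant vibration frequency
```

Doubling `n` halves the spacing between frequency bins but roughly doubles both the working buffer and the processing time, which is exactly the tradeoff the Edge Impulse performance estimate helps you judge.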
Lastly, the settings used to control the output of the NN classifier can be altered before finally generating the neural network model (aka impulse) itself. Parameters that can be adjusted include the number of training cycles, learning rate, validation set size, and the number of neurons for the intermediate layers of the network between the input and output layers. The ability to alter these parameters is crucial in preventing a common data science problem known as overfitting, which occurs when a model works perfectly with the training data but fails miserably when exposed to new data.
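The validation set size parameter exists precisely to catch overfitting: a fraction of the labeled data is held back from training and used only to check the model. A minimal sketch of such a split, using hypothetical toy samples:

```python
import random

def train_validation_split(dataset, validation_fraction=0.2, seed=42):
    """Hold out a fraction of the labeled data to check for overfitting."""
    data = list(dataset)
    random.Random(seed).shuffle(data)          # deterministic shuffle
    n_val = max(1, int(len(data) * validation_fraction))
    return data[n_val:], data[:n_val]           # (training set, validation set)

# Hypothetical labeled samples: (feature, label) pairs.
samples = [(i, "Failure" if i >= 10 else "Normal") for i in range(20)]
train, val = train_validation_split(samples, validation_fraction=0.2)
```

If accuracy on the validation set lags far behind accuracy on the training set, the model is likely overfitting, and reducing training cycles or hidden-layer neurons is a reasonable first response.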
Step 5: Test the NN Model
Overfitting is a common concern for developers of machine learning algorithms. To ensure that the model is made sufficiently generic, it is necessary to test the neural network that Edge Impulse has generated against independent test data. The same techniques that Edge Impulse offers to collect training data can be used to collect test data. In addition to classifying previously recorded test data, it is also possible to have data streamed from the test device and classified in real-time on Edge Impulse servers. Designers can use either a direct connection in the firmware powered by the Edge Impulse Application Programming Interface (API) or the data forwarder proxy to get data from the sensor to the cloud.
Step 6: Deploy the NN Model
After the neural network achieves satisfactory results against the test data, it’s time to package the NN model into a software library that can be deployed on microcontroller-based systems. Edge Impulse makes this an incredibly straightforward process. First, the model can be placed under version control so that future refinements can be compared to past models should that be needed. Next, the model can be turned into “turnkey ready” firmware for various embedded system development boards.
For development boards that Edge Impulse does not directly support, it is still possible to generate generic libraries, including the model files, for targets based on C++, Arduino, WebAssembly, TensorRT, and STM32Cube.MX CMSIS-PACK. Before the library or firmware is generated, it is also possible to run optimizers to achieve either speed or memory-usage optimizations, depending on the hardware specifications of the target on which the NN model (aka impulse) will run. In addition, the impulse can be generated either as a quantized model that uses 8-bit integers or as an unoptimized model that uses 32-bit floating-point numbers.
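The essence of the 8-bit option is quantization: mapping 32-bit floats onto 8-bit integers via a scale factor, trading a small amount of precision for roughly a 4x reduction in size. The following is a deliberately simplified sketch of the idea, not the exact scheme Edge Impulse or TensorFlow Lite uses:

```python
def quantize(values):
    """Symmetric 8-bit quantization: map floats onto the range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the 8-bit representation."""
    return [qi * scale for qi in q]

weights = [-0.5, 0.0, 0.25, 0.9]           # hypothetical model weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_error = max(abs(w - r) for w, r in zip(weights, restored))
# Each weight now fits in one byte instead of four, at the cost of max_error.
```

On microcontrollers, the payoff is not just smaller flash and RAM footprints; integer math is also much faster than floating point on cores without an FPU.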
Impulses can also run on embedded systems running Linux, as Edge Impulse also provides Software Development Kits (SDKs) for C++, Go, Node.js, and Python. It is also possible to run impulses on Windows and macOS with a C++ library.
Lastly, the impulse can be deployed to a smartphone directly without the need for any additional application being installed on the target device.
For those looking to integrate AI technologies into their next embedded system project, looking through the documentation and forums of Edge Impulse is a free and easy way to start understanding ML on the edge. A limited, free version is available for testing the Edge Impulse ecosystem. The key constraints of the free tier are a single developer seat, a maximum 20-minute processing time, and a cloud storage limit of 4GB or 4 hours of data. In addition, an enterprise version is available, payable on a per-project basis, which removes the restrictions of the free tier and provides access to a private cloud and five seats per project.
Welcome to the second blog in our Edge Impulse Fundamental series. In the first blog, we demonstrated the various mechanisms Edge Impulse offers. In this blog, we will take a practical look at the overall Edge Impulse workflow: from data collection and training to firmware deployment on targeted edge devices. To help with this, let’s imagine a real-world example. In this case, let’s assume we are trying to build a device that listens for a “secret” series of knocks and unlocks a door if the correct sequence of knocks is detected. We will leverage the microphone aboard the Arduino Nano 33 BLE Sense development board.
First, let’s download the necessary applications to make all this work. This includes the following:

- The Edge Impulse CLI (which requires Node.js and Python 3 to be installed)
- The pre-built Edge Impulse firmware for the Arduino Nano 33 BLE Sense
At this point, we simply need to flash the firmware and launch the serial daemon. Conveniently included in the firmware repository are Windows, macOS, and Linux scripts to automate this process for select development boards, including the Nano 33 BLE Sense. If you haven’t done so already, create an account and log in to the Edge Impulse Studio (https://studio.edgeimpulse.com). Then click on the Devices tab to ensure your development board has successfully made contact with the Edge Impulse service (Figure 1). With all that done, it’s time to dive into the real focus of this article: how to train a new model from scratch.
Figure 1: Edge Impulse provides native support for a wide range of development boards to connect directly to their training and testing environment. (Source: Green Shoe Garage)
For this project, we will use the Nano 33 BLE Sense’s built-in microphone to listen for a distinct pattern of knocks. In order to train the model, we will need to collect two datasets—one that captures the ambient sound with no knocks and one that captures the series of secret knocks. This process is called ingestion. Click on the Data Acquisition tab and look for the Record New Data section.
Some key attributes of note are the Sample Length and Frequency. The sample length determines how long a recording will be made, and the frequency determines the number of samples taken per second. Edge Impulse recommends capturing ten minutes of audio (captured in one-minute chunks), five minutes of just ambient noise, and five minutes of the knock. It is important to remember that capturing one minute's worth of audio data may take several minutes as the amount of memory on your particular development board will constrain you.
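The memory constraint becomes obvious with quick arithmetic. Assuming 16-bit mono samples at a 16kHz sampling rate (typical figures for audio capture on this class of board, though the exact rate depends on the firmware):

```python
def recording_size_bytes(seconds, sample_rate_hz=16000, bytes_per_sample=2):
    """Size of an uncompressed mono audio buffer."""
    return seconds * sample_rate_hz * bytes_per_sample

one_minute = recording_size_bytes(60)      # bytes needed for one minute of audio
board_ram = 256 * 1024                     # Nano 33 BLE Sense (nRF52840): 256 KB RAM
max_seconds = board_ram / (16000 * 2)      # seconds that fit in RAM at once
```

One minute of audio needs nearly 2MB, while the board can buffer only about eight seconds at a time, so the firmware must capture and transmit in short bursts, which is why a one-minute sample takes several minutes to collect.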
After we have the necessary raw data, it must be processed and turned into a neural network (Figure 2). In Edge Impulse parlance, this is called designing an ‘impulse.’ This is a multi-part process where we first define how to chop up the raw data into windows. This is done by specifying two values: first, the window size, which controls how much time, in milliseconds, each window should last; second, the window increase, which controls the start time of each subsequent window.
Figure 2: Edge Impulse tools provide a simple manner to review and tag training and test data right from the browser. (Source: Green Shoe Garage)
With raw data broken into appropriate-sized windows, it’s time to transform the raw data into something useful to the neural network training algorithms. This process begins by sending the raw data through the aptly named processing block. Edge Impulse applies a signal processing technique called Mel Frequency Cepstral Coefficients (MFCC) for audio data. Other processing blocks are available, including blocks for images, flattening (slow-moving data like humidity readings), spectral analysis (fast-moving data like an accelerometer), and the ability to create a custom processing block. Several variables can be tweaked with the MFCC processing block, including:

- The number of cepstral coefficients to compute
- The frame length and frame stride used to slice the audio
- The number of mel filters and the FFT length
- The low- and high-frequency edges of the analysis band
By tweaking these parameters, you can alter the output of the MFCC block. The effect of these tweaks on the raw audio data is visualized as a spectrogram. The goal of tweaking these parameters is to ensure that the features that make up the knock and no-knock datasets are accurately and efficiently extracted from the raw data. The better the results achieved now, the more easily the neural network will be able to infer correctly when the device is fielded in real-world conditions.
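The “mel” in MFCC refers to a frequency scale that mimics human hearing, spacing analysis filters densely at low frequencies and sparsely at high ones. The widely used conversion formula can be sketched as:

```python
import math

def hz_to_mel(f_hz):
    """Standard mel-scale conversion used in MFCC feature extraction."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse conversion, used to place filter edges back in hertz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# Equal steps on the mel scale correspond to ever-wider steps in hertz,
# which is why low-frequency detail (like a knock) is resolved finely.
```

The scale is calibrated so that 1000Hz maps to roughly 1000 mels, with compression growing above that point.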
Remember that different data types use different methods to prepare the data for use in machine learning (ML) algorithms. Knowing which is correct for your data is a big part of the education and experience you will gain working on ML projects.
The massaging of the raw data is followed by a so-called learning block, which takes the output of the processing block and uses it to train a neural network model. From the Edge Impulse Studio, select NN (Keras) Classifier from the left-hand navigation window; this block is suited for categorizing movement or recognizing audio. There is also transfer learning for classifying images and K-means anomaly detection for finding outliers in new data. There are also a few parameters to tweak before we train the neural network model. These include:

- The number of training cycles (epochs)
- The learning rate
- The validation set size
- The number of neurons in the network’s hidden layers
With parameters adjusted, click on Start Training; at the end, you will have a trained neural network.
Just as getting a degree isn’t the end of professional learning, the effectiveness of the neural network can be improved by feeding it real-world data (meaning new data that was not used to train the neural network initially). This is a two-step process on Edge Impulse. The first is a quick test called Live classification, which exposes the neural network to new data to see how well it performs (Figure 3). One concern is the problem of overfitting, where the neural network responds excellently to the test data but not to new, real-world data because the model has, in effect, “memorized” the test data. The second, more rigorous step is known as Model testing. Every time live classification is run, the data is added to an ever-expanding test dataset.
Figure 3: The ability to visualize and tune the raw data to improve the efficiency and accuracy of the model is a significant advantage of Edge Impulse over earlier manual processes associated with ML development. (Source: Green Shoe Garage)
Once the impulse has been trained and verified, it’s time to deploy the model back to your device. From an end user’s perspective, the magic of AI occurs during inferencing. That is when a fully trained model is deployed to either a cloud environment or an edge device so it can begin making predictions based on real-world interactions. Reaching that payoff, however, requires an easy-to-use software library that can be integrated into one’s project. Edge Impulse will package up the complete impulse (including the MFCC algorithm, neural network weights, and classification code) into a single C++ library. This lets the model run on low-powered embedded systems that may even lack an internet connection.
Join us for the third part of the Edge Impulse Fundamentals series. In part three, we will explore in detail one of the most crucial steps of the Edge Impulse workflow: impulse design.
Welcome back to our series on Edge Impulse, one of the major players in the world of embedded machine learning, specifically designed to provide developers with the tools they need to integrate machine learning capabilities into edge devices. Among their many functionalities, live classification is an essential feature for real-world testing. The live classification feature allows you to validate your model within the browser with data captured directly from any device or supported development board. Thus, live classification eliminates the need to redeploy with every iteration of your model. To use live classification, you first need to create an impulse, as we discussed in the previous entries in this series. Recall that an impulse is a collection of data, preprocessing blocks, and learning blocks that can be used to classify new data. Once you have created an impulse, you can connect your device to Edge Impulse and start live classification.
Live classification in the context of Edge Impulse refers to the near real-time, cloud-based processing and analysis of data taken directly from sensors aboard edge devices. When you are in live classification mode, Edge Impulse will continuously stream data from your device and classify it using your model. You can see the classification results in real time and adjust the classification thresholds to improve accuracy. Live classification can also be used to debug your model and identify any problems. Lastly, validating your model with real-world data makes it more likely to perform well for others once deployed.
To enable live classification, you must use one of the supported development boards, a smartphone, or a desktop computer. If you are using a supported development board, it must be connected to an internet-connected desktop computer via USB. The data will stream from the desktop to Edge Impulse ingestion service using the Edge Impulse Command Line Interface (CLI) or Web Serial (WebUSB). The chief advantage of WebUSB is that it can collect data from any fully supported development board straight from your browser without the need to install additional software onto your computer. The data forwarder of the CLI, on the other hand, can be used on any development board beyond those that are officially supported.
Inside Edge Impulse Studio, the live classification tool offers a few settings the developer can tweak. First, you can specify which device to accept incoming data from. If the board has multiple sensors, you can designate which sensor to perform live classification against. Lastly, you can adjust the sample length (in milliseconds) and sample frequency to improve the model performance (Figure 1). Also, recall that every learning block has a threshold. The threshold can be the minimum confidence that a neural network requires or the maximum anomaly score before a sample is tagged as an anomaly. You can configure these thresholds to tweak the sensitivity of these learning blocks. This affects both live classification and model testing.
Figure 1: Live classification tools in Edge Impulse Studio make real-world ML model testing a snap. (Source: Green Shoe Garage)
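The interplay between a minimum-confidence threshold and a maximum anomaly score can be sketched as a small decision function. The function name and the default values here are illustrative, not part of the Edge Impulse API:

```python
def interpret(certainties, min_confidence=0.60,
              anomaly_score=None, max_anomaly=0.30):
    """Apply learning-block thresholds to one classification result.

    certainties: mapping of label -> confidence from the classifier.
    anomaly_score: optional score from an anomaly-detection block.
    """
    # An anomaly-detection block vetoes the classification outright.
    if anomaly_score is not None and anomaly_score > max_anomaly:
        return "anomaly"
    label, confidence = max(certainties.items(), key=lambda kv: kv[1])
    # Below the confidence threshold, refuse to commit to a label.
    return label if confidence >= min_confidence else "uncertain"

result = interpret({"knock": 0.82, "noise": 0.18})
```

Raising `min_confidence` reduces false positives at the cost of more “uncertain” results, which is exactly the sensitivity tradeoff the Studio thresholds let you tune.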
With the target device connected to the development computer, click the Live classification button from within Edge Impulse Studio. This button is located in the top right corner of the user interface. Once you have clicked it, you will need to start streaming data from your device; the specific method will depend on your development board. Once data is streaming, you will see the classification results in real time (Figure 2). The results are displayed in a table and also plotted on a graph. From here, you can adjust the classification thresholds, which determine how confident the model must be before it commits to a classification, to improve accuracy.
Figure 2: Live classification results can be reviewed in many ways inside Edge Impulse Studio. (Source: Green Shoe Garage)
Here are some additional tips for performing live classification with Edge Impulse:
Edge Impulse live classification is a powerful tool that can be used to validate and improve the performance of machine learning models for embedded devices. By streaming data from the device in real-time, Edge Impulse allows developers to see how the model is performing and adjust as needed to ensure that the model is ready for deployment and that it will perform well in the real world. In the next entry, we will look at how Edge Impulse supports modern development operations (DevOps) procedures, including version control and secure deployment of trained models from the cloud to edge devices.
In the continuously evolving landscape of machine learning (ML) and artificial intelligence (AI), the ability to manage, version, and deploy models efficiently is of paramount importance. Edge Impulse, with its dedication to the realm of edge computing, recognizes these needs and has consequently developed features that make model versioning and deployment not just feasible but also efficient. Let’s delve deep into how Edge Impulse manages these functionalities and why they are crucial for developers and organizations.
Let’s begin by gaining an understanding of the significance of model versioning (Figure 1). There are many essential aspects of versioning that become increasingly important as we move from prototype to production. Edge Impulse provides:
Figure 1: ML model version control is built into Edge Impulse Studio, making configuration management more attainable. (Source: Green Shoe Garage)
For designers, developing ML models requires configuration management that is adaptable and efficiently integrates as requirements change. Recognizing these needs, Edge Impulse has incorporated features that enable effective model versioning:
Once a model is developed, refined, and versioned, the next step is deployment. Deploying machine learning models, especially on edge devices, comes with its unique set of challenges. Edge Impulse’s deployment strategy addresses the following challenges:
Figure 2: Edge Impulse allows for Over-the-Air updates to edge devices via cloud connectivity. (Source: Green Shoe Garage)
While versioning and deployment might seem like distinct phases, they are intrinsically linked in a number of ways:
In the rapidly progressing world of machine learning on edge devices, platforms like Edge Impulse play a pivotal role in ensuring that the development and deployment processes are efficient, manageable, and scalable. By integrating model versioning and deployment functionalities into a unified platform, Edge Impulse simplifies the workflow for developers. It ensures that models are always at their best when making real-time decisions on edge devices.
Furthermore, in a world where collaboration, accountability, and adaptability are becoming increasingly crucial, features like versioning become valuable and indispensable. As more devices incorporate AI and machine learning capabilities, platforms like Edge Impulse will undoubtedly be at the forefront, shaping the future of intelligent, responsive, and efficient edge devices.
The workflow for developing solutions that leverage machine learning (ML) models can be complex. The workflow can be broken into three major phases:

- Data collection and preparation
- Model training and testing
- Model deployment
At the heart of each phase is data. Data is the raw material that drives the development of ML models. That data can come from a wide variety of places, from enterprise systems to sensors that drive the Internet of Things (IoT), which will be the focus of this article. Without a large and accessible dataset to draw upon, it is impossible to train an ML model reliably.
That data must first be collected, cleansed, organized, and cataloged before it can be used to train ML models. While it is possible to use pre-trained models, their drawback is that they are likely generic and may need to be refined to work well for your particular use case or operating environment. For example, an audio detection model may have difficulty picking out desired sounds if it is not trained with data collected from the actual location it will be fielded. This is because differences in ambient background noises could affect the model.
Compounding all this is that IoT endpoint devices such as microcontrollers and FPGAs are often highly constrained in terms of memory and processing horsepower. Being able to generate a model that can run on such constrained architectures is an additional challenge. Much progress has been made recently in getting neural networks to run on low-power devices with the advent of TensorFlow Lite for Microcontrollers and CMSIS-NN from Arm®. Still, collecting new datasets and training new models has not been for the faint of heart. Fortunately, various services are emerging to make the data processing, model training, and deployment process as simple as possible. One such service is Edge Impulse, a company based in San Jose, California.
Edge Impulse is a cloud-based service that, in a nutshell, allows developers to connect a variety of embedded platforms, send sensor data to the cloud, use that data to train a TinyML model, and send the model back to the IoT device for inferencing (Figure 1). It does so intuitively using a handful of well-designed tools and workflows.
Figure 1: Edge Impulse is a cloud-based tool that significantly reduces the workflow complexity for embedded systems developers to add machine learning (ML) technology to their products. (Source: Edge Impulse)
Over the next year, we will explore the various software components of Edge Impulse and how they can be leveraged to simplify the development of machine learning algorithms for use in embedded systems. This blog will look at developing a rudimentary model that leverages the accelerometers aboard an Arduino Nano 33 BLE Sense development board.
Getting data from a sensor to the cloud has become a relatively trivial matter thanks to the proliferation of wireless internet connectivity and simplified programming tools such as Application Programming Interfaces (APIs). Edge Impulse leverages these advancements to make data ingestion a snap (Figure 2).
Figure 2: Edge Impulse provides multiple ways to bring training and test data into their environment, including .csv files of raw data. (Source: Edge Impulse)
Edge Impulse provides three tools to assist in this first stage of the ML pipeline:

- edge-impulse-daemon, which connects an officially supported development board directly to the Edge Impulse ingestion service
- edge-impulse-uploader, which uploads previously collected sample files
- edge-impulse-data-forwarder, which forwards sensor readings streamed over a serial connection from otherwise unsupported boards (Figure 3)
Figure 3: For development boards that lack native Internet connectivity, data can be sent to the Internet-connected host computer via USB and the host computer can send the data to Edge Impulse. (Source: Edge Impulse)
Lastly, for a handful of embedded development boards with onboard sensors, Edge Impulse offers pre-built firmware that sends all sensor data to their ingestion service with minimal setup. The firmware leverages the tools mentioned above. Then in the browser, you can pick out which data you want to use to build your model. We will explore this further in a future edition of Mouser’s Edge Impulse Fundamentals series.
Welcome back to our ongoing series on how developers can leverage the services of Edge Impulse to bring machine learning technology to embedded systems. As a reminder, Edge Impulse is a platform for building, deploying, and managing edge-device machine learning models. One of the key features of Edge Impulse is its impulse design functionality, which enables users to create custom machine-learning models for their specific use cases. In this third chapter, we will explore in detail one of the most crucial steps of the Edge Impulse workflow: impulse design.
But first, let’s define what exactly Edge Impulse means by the term “impulse.” In Edge Impulse, an impulse is a machine-learning model created and optimized for deployment on edge devices. The term "impulse" conveys the idea of a small, self-contained unit of intelligence that can be deployed on a device and run independently. An impulse in Edge Impulse consists of a pre-processing pipeline, a machine-learning model, and deployment code. The pre-processing pipeline includes a series of processing blocks that transform the raw input data into a format the machine learning model can use. This will be the focus of this entry.
In the previous chapter, we demonstrated how users could upload their own datasets from the real world onto the Edge Impulse platform. Once raw data has been uploaded to Edge Impulse, refined, and tagged, a developer can begin the process of “impulse design,” or defining their own custom machine learning pipeline. The first step is to tell Edge Impulse what type of data has been uploaded. The workflow gives users three preconfigured options, including accelerometer data, sounds, or images. It also provides a fourth generic option titled “Something Else” to handle other types of sensors, potentially anything from temperature to light sensors and anything in between.
In general, if the data is not an image, then it will likely be some form of time series data (Figure 1). In other words, one or more sensor values will change over time in response to environmental stimuli. It is essential to determine whether the changes will occur fast (e.g., automotive impact detection) or slow (e.g., the temperature in certain manufacturing processes) over time, as this will be an important consideration later.
Figure 1: The first two tools of Impulse Design are 1) the Input Block and 2) the Processing Block. (Source: Green Shoe Garage)
Once the data type is selected, the developer will be presented with the ML pipeline and options to tweak the input block if required (Figure 2). The content of the input block will vary depending on the data type. For our purposes of exploring embedded systems, we will demonstrate accelerometer data. Per Edge Impulse’s user guide, the following options are available to be tweaked for time series data:

- Window size, the length of each window of raw data
- Window increase, how far the start of each subsequent window is advanced
- Frequency, the sampling rate of the data
- Zero-pad data, whether windows that run short of samples are padded with zeros
Figure 2: How the settings of the Input Block can impact extracting features from the raw data. (Source: Edge Impulse)
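As a rough illustration of how the window settings interact, the hypothetical helper below computes how many overlapping windows a recording yields given a window size, window increase, and sampling frequency. The function and its parameter names are assumptions for illustration, not Edge Impulse code:

```python
# Sketch of how an input block's window settings slice a time series.
# Parameter names mirror the kinds of settings shown in Figure 2, but
# the function itself is purely illustrative.
def count_windows(n_samples, frequency_hz, window_size_ms, window_increase_ms):
    window = int(frequency_hz * window_size_ms / 1000)      # samples per window
    stride = int(frequency_hz * window_increase_ms / 1000)  # samples per step
    if n_samples < window:
        return 0
    return (n_samples - window) // stride + 1

# 2 s of accelerometer data at 100 Hz, 1000 ms windows, 500 ms increase:
print(count_windows(200, 100, 1000, 500))  # 3 windows
```

A smaller window increase produces more (overlapping) windows from the same recording, which effectively multiplies the amount of training data extracted from each sample.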
The next decision in the workflow is the application of various “processing blocks.” The job of a processing block is to pre-process the raw data and extract the features that will be used to train the model. Edge Impulse conveniently provides a library of predefined digital signal processing blocks for different applications, the most common being spectral analysis for vibration and motion data, MFCC and MFE for audio, and an image block for vision applications.
Of course, you may have unusual data or custom digital signal processing algorithms that you wish to employ. In that case, Edge Impulse provides a mechanism by which end users can define custom processing blocks and use them for model training within the Edge Impulse Studio workflow. If you want to learn more about employing custom-made processing blocks, please check this article (Figure 3).
Figure 3: Edge Impulse provides many types of Processing Blocks for different types of data such as images, sound, and accelerometers. (Source: Green Shoe Garage)
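To give a flavor of what a custom processing block might compute, here is a minimal, hypothetical feature extractor that reduces a raw window of accelerometer samples to a few time-domain statistics. A real spectral-analysis block would also compute frequency-domain features via an FFT; this sketch is deliberately simplified:

```python
import math

# Hypothetical custom processing block in the spirit of a spectral
# feature extractor: reduce a raw window of samples to summary features.
# Purely illustrative; not the Edge Impulse processing block API.
def extract_features(window):
    n = len(window)
    mean = sum(window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)   # overall signal energy
    peak_to_peak = max(window) - min(window)          # vibration amplitude
    return [mean, rms, peak_to_peak]

print(extract_features([0.0, 1.0, 0.0, -1.0]))  # [0.0, ~0.707, 2.0]
```

Whatever the block computes, its output is a fixed-length feature vector, which is what makes the downstream model small enough to run on a microcontroller.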
It should also be noted that processing blocks can be combined in a pipeline to create a custom data preprocessing workflow. The output of the processing blocks is then fed into “learning blocks.” A learning block is a neural network trained to learn from the processed data output by the processing block workflow.
Learning blocks will be the topic of our next entry in this blog series.
In our last entry in the Edge Impulse blog series, we took a deep dive into the Create Impulse workflow by learning about the types and purposes of processing blocks. As a quick reminder, processing blocks extract the unique features from the raw data that will, in turn, be fed into the learning blocks to generate a custom machine learning model that will eventually be deployed onto embedded systems for real-time edge inferencing. There are various types of processing blocks, each suited to different kinds of data, such as images, sounds, and accelerometer signals.
In this entry, we will look at learning blocks and the output block, which conclude the Create Impulse workflow. In Edge Impulse, learning blocks are the machine learning algorithms used to train models on the preprocessed data. These algorithms are designed to discover patterns and relationships in the extracted features and to make predictions or classifications based on those discoveries.
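As a toy stand-in for a learning block, the sketch below “trains” a nearest-centroid classifier on feature vectors and then classifies a new sample. The function names and the gesture labels are invented for illustration; Edge Impulse’s real learning blocks use neural networks and similar models, but the train-on-features-then-predict flow is the same:

```python
# Illustrative stand-in for a learning block: a nearest-centroid
# classifier. All names and data here are invented for the example.
def train_centroids(features, labels):
    sums, counts = {}, {}
    for vec, label in zip(features, labels):
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # Each class is summarized by the mean of its feature vectors.
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Predict the class whose centroid is nearest to the sample.
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], vec))

centroids = train_centroids(
    [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]],
    ["idle", "idle", "wave", "wave"],
)
print(predict(centroids, [0.85, 0.85]))  # "wave"
```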
Various “pre-canned” learning blocks are available to Edge Impulse users to suit different use cases, and they can be stacked to create different outputs (Figure 1). The main learning blocks available by default include classification, regression, anomaly detection (K-means), and transfer learning for image and keyword-spotting applications.
Figure 1: The final two stages of Impulse Design are setting up the learning blocks and the output block. (Source: Green Shoe Garage)
As with processing blocks, users can create their own custom learning blocks in PyTorch, Keras, or scikit-learn and bring them into the Edge Impulse training pipeline. Edge Impulse supports a range of popular machine learning algorithms, including neural networks, decision trees, and support vector machines. Understanding the nuances of these algorithms is crucial because the parameters users configure can significantly impact a learning block’s performance, so it is worth studying the most common machine learning models before selecting one for a given task.
Overall, the learning blocks of Edge Impulse provide a range of machine learning algorithms that are suitable for different types of input data and different use cases. As a result, users can create machine learning models optimized for their specific needs by selecting the appropriate learning block for a given task.
At this point, the impulse training pipeline has been designed. Now the training data can be used to train the neural network, and we can begin to ascertain the performance of our model (Figure 2). The following steps will vary depending on the learning blocks applied.
Figure 2: Each learning block added to an impulse can be analyzed and tweaked. In this example, the performance of an anomaly detection learning block is displayed. (Source: Green Shoe Garage)
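The anomaly detection shown in Figure 2 is a K-means-style learning block, and its core idea is simple: samples that fall far from every cluster centre learned from normal training data are flagged as anomalous. The hedged sketch below captures that idea; the centroids and threshold are made up for illustration:

```python
import math

# Sketch of the idea behind a K-means-based anomaly detection learning
# block: score a sample by its distance to the nearest cluster centre.
# Centroids and threshold below are illustrative, not trained values.
def anomaly_score(centroids, vec):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(c, vec) for c in centroids)

centroids = [[0.0, 0.0], [1.0, 1.0]]  # "learned" from normal training data
threshold = 0.5                       # tuned on held-out normal data

print(anomaly_score(centroids, [0.1, 0.1]) > threshold)  # False (normal)
print(anomaly_score(centroids, [3.0, 3.0]) > threshold)  # True  (anomaly)
```

Raising the threshold makes the detector more tolerant of novel inputs; lowering it makes it more sensitive, which is exactly the trade-off exposed when tweaking this learning block in the studio.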
In the next part of the Edge Impulse blog series, we will discuss how to use the Edge Impulse training pipeline to analyze spectral components of the raw data and generate features, anomaly detection, and classifiers.
Copyright ©2024 Mouser Electronics, Inc.