This week, I wanted to dig deeper into the fabric that connects the field-programmable gate array (FPGA) and the hard processor system (HPS). I discovered the three main bridges that accomplish this task, how they are mapped and addressed, and which components oversee their timing and access.
The HPS-to-FPGA interface is built on AXI (Advanced eXtensible Interface) bridges. An AXI bridge handles the data-width adaptation and clock-domain crossing needed to pass logic and data from the HPS to the FPGA and from the FPGA back to the HPS.
Figure 1. A visualization of the “FPGA Fabric” (Source: Intel® PSG)
There are two types of HPS-to-FPGA bridges: a high-throughput bridge and a low-throughput bridge. The high-throughput bridge can be 32, 64, or 128 bits wide. It's designed for high-bandwidth data transfers, with the HPS L3 interconnect acting as the master.
The lightweight (or "lower-throughput") bridge is limited to 32 bits; however, it's optimized to minimize latency. Its primary function is to carry control- and status-register accesses to the FPGA, diverting that low-bandwidth traffic away from the main HPS-to-FPGA bridge. A good analogy for this bridge is shown in Figure 1, where two bridges from HPS to FPGA are illustrated: One has a single (32-bit) lane but a higher speed limit, while the other has many lanes and allows more traffic density (bandwidth) to move in the same timeframe.
The third bridge accomplishes FPGA-to-HPS data transfers. It's designed to access the HPS slave-interface functions or applications waiting in the HPS program for data input. It's configurable to a 32-, 64-, or 128-bit data width, and it's controlled by the HPS L3 master-switch clock.
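To make the addressing concrete, here is a minimal sketch of how an FPGA peripheral placed behind one of these bridges gets an absolute address as seen from the HPS. The base addresses below match Intel's published Cyclone V HPS memory map as I understand it, but verify them against your device's documentation; the LED peripheral and its offset are purely hypothetical.

```python
# Bridge windows in the Cyclone V HPS address map (assumed values — check
# the Cyclone V HPS Technical Reference Manual for your device).
HPS2FPGA_BASE = 0xC000_0000    # high-throughput HPS-to-FPGA bridge window
LWHPS2FPGA_BASE = 0xFF20_0000  # lightweight bridge window (32-bit only)

def peripheral_address(bridge_base: int, offset: int) -> int:
    """Absolute HPS address of an FPGA peripheral placed at `offset`
    within a bridge's window (the offset is assigned in Platform Designer)."""
    return bridge_base + offset

# A hypothetical LED PIO core placed at offset 0x0 behind the lightweight bridge:
led_pio = peripheral_address(LWHPS2FPGA_BASE, 0x0)
print(hex(led_pio))  # 0xff200000
```

The point is simply that software never talks to "a bridge" directly; it reads and writes ordinary addresses inside the bridge's window, and the fabric routes the transaction to whichever FPGA peripheral was placed at that offset.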
To tie these bridges together, I began by reading the Intel® Developer Zone's Golden Hardware Reference Design (GHRD) guide, which gives examples of how to set up the AXI bridges that make up the FPGA-to-HPS fabric. It was here that I truly learned to appreciate how powerful the configuration wizards are. Within six clicks, I had all three bridges configured and a usable device for configurable memory allocation. Along the way, I learned that the HPS bridges are mapped to on-chip memory to keep latency as low as possible, while the FPGA portions are mapped to slave-access memory locations, allowing memory to be written as data becomes available.
So, what does this all mean? Bridges and layers are something that, as someone with a background in low-level, low-power microcontroller units (MCUs), I've had very limited opportunity to use. Nonetheless, these bridges may be familiar to developers accustomed to very low-level Arm® MCU programming. Essentially, they are a set of control registers and memory mappings that are accessed at very high speed and are particularly useful in multi-threaded, multi-core systems that need high-speed, multi-purpose data transfers. Of course, the idea of interconnects is common to all MCU enthusiasts. Using interconnects or bridges to offload tasks is familiar, yet accessing them as if they were memory or RAM is novel. Simply put, the L3 interconnect is where the FPGA-to-HPS fabric is introduced, allowing data to transfer from one processing domain to the other. It frees the FPGA to perform the tasks that would otherwise bog down the HPS, thus improving overall performance.
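Here is a small sketch of what "accessing a bridge as if it were memory" looks like from software. On a real HPS running Linux you would typically mmap /dev/mem at the bridge base address; in this runnable stand-in, a bytearray plays the role of the mapped window, and the register names and offsets are hypothetical.

```python
import struct

class BridgeWindow:
    """Stand-in for a memory-mapped bridge window (a bytearray instead of
    an mmap'd region, so the idea can be run anywhere)."""
    def __init__(self, size: int):
        self.mem = bytearray(size)

    def write32(self, offset: int, value: int) -> None:
        # Little-endian 32-bit store, like a register write over the bridge.
        struct.pack_into("<I", self.mem, offset, value & 0xFFFF_FFFF)

    def read32(self, offset: int) -> int:
        # Little-endian 32-bit load, like a register read over the bridge.
        return struct.unpack_from("<I", self.mem, offset)[0]

lw = BridgeWindow(0x1000)
CTRL_REG = 0x00     # hypothetical control register offset
STATUS_REG = 0x04   # hypothetical status register offset
lw.write32(CTRL_REG, 0x1)           # "start" the FPGA peripheral
print(hex(lw.read32(CTRL_REG)))     # 0x1
print(hex(lw.read32(STATUS_REG)))   # 0x0 — untouched locations read as zero here
```

The novelty for an MCU developer is exactly this: the FPGA peripheral behaves like a few words of RAM at a known offset, with no driver call or message-passing layer in between.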
Recently, I was given a project that would require me to transition from MCU to FPGA development. In this four-part blog series, I examine how I translated my existing knowledge and experience with MCUs into FPGA development. In Part 1, I examined some advantages and disadvantages of FPGAs, introduced the Terasic DE10 Nano development kit, and explored a few key FPGA planning considerations. Now in Part 2, I explore example code and discover additional useful resources.
The differences between an MCU and an FPGA are a bit like the differences between a scooter and a car: While both will get you from Point A to Point B, the mechanics are fundamentally different. I thought this analogy was a good one for describing pin modes, pin types, and parallel vs. serial processing, all of which are quite different in an FPGA as compared to an MCU.
At first, I started with the Terasic setup and guided practice, but I kept getting stuck. Every time, the compiling process ended with an error. Intent on not giving up, I got another cup of coffee and began going through the Intel Developer Zone website, which offered simpler examples. I was amazed at the reduction in complexity! Here, the examples were easy enough to follow—they even compiled and worked. Once I understood the basics, working through the Terasic examples was much easier. I think this was partly because the compilers were set up and partly because I had more familiarity with them.
The Terasic DE10-Nano combines an MCU—i.e., the hard processor system (HPS)—with an FPGA, so I decided to get started in familiar territory: the MCU. The Arm ("my first HPS") development felt familiar, simple, and unencumbered in the Eclipse IDE, and the Intel SoC development tool helped make programming the system a breeze. I went a little beyond the "Hello World" example and added another line, which would do nothing more than test the capabilities of the compiler—thankfully, I didn't end up with a compiler fail. The IDE was brilliant and felt very much like most IDEs I had dealt with in the past.
Eventually, I had to move on to the FPGA portion, where the fundamental difference is that I can do many things simultaneously (in parallel), in contrast to an MCU's usually serial fashion. Adapting to this concept felt a bit more complex and much slower; however, considering that the concept was new, it was not overly difficult. The Intel Developer Zone version was definitely the best set of guides to start with, owing to the prebuilt configuration and guided installation.
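The serial-versus-parallel distinction can be illustrated with a toy example. In hardware-description terms, every flip-flop computes its next value from the same pre-clock-edge snapshot of the state and then all of them update together; in an MCU, each statement sees the result of the one before it. A classic demonstration is swapping two values (a sketch, not from any vendor example):

```python
def serial_no_temp(state):
    """MCU-style sequential execution: each line sees the previous result,
    so a swap without a temporary variable fails."""
    a, b = state
    a = b        # a is overwritten first...
    b = a        # ...so b just gets the new a back
    return (a, b)

def parallel_swap(state):
    """FPGA-style update: both next-state values are derived from the same
    snapshot of the current state, then committed together — no temp needed."""
    a, b = state
    next_a, next_b = b, a   # both computed from the pre-edge state
    return (next_a, next_b)

print(serial_no_temp((1, 2)))  # (2, 2) — order of execution matters
print(parallel_swap((1, 2)))   # (2, 1) — like two flip-flops on one clock edge
```

This is the same distinction Verilog draws between blocking and non-blocking assignments, and internalizing it was the biggest mental shift for me.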
The Intel Developer Zone installation file gets the basics started; Terasic then builds on those new skills, adding additional functions and a complete process—rather than a hunt-and-peck, copy-and-paste method of learning. Intel truly introduces what I believe to be the necessary knowledge base for building my own application in the near future, including building block diagrams, timing profiles, and I/O programming. Block diagrams provide a clear visual flow during very complex program developments, while timing profiles govern serial versus parallel behavior and the bus timing protocols the design references.
Any pin can do anything, which is probably the most well-known feature of the FPGA. (Programming pins was a trip!) The pin allocation manager was really cool, but the table lookup was daunting. Thankfully, in the newest version of Quartus, Terasic has provided a complete map to all the ports and pinouts, using a well-documented naming schema. This makes the coding portion much easier.
Working in a new development environment was uncomfortable. The new process and keyboard shortcuts required an adjustment, and, of course, there were the “usual” setup issues that come with setting up a new IDE. However, the documentation was clear, and the images helped a lot. Intel has even expanded its capabilities to create a Linux setup and a Windows setup, providing Linux emulation for programming, which greatly simplifies the initial process. However, I still had problems with setting up Linux and gave up on my attempt at a self-compiled Linux IDE. Later, I did find a 120-page guide to properly set it up. Go figure!
I enjoyed this exercise, and I feel confident about going forward. However, this phase was very heavy in downloads, configuration, and figuring out what to do next. I’m stoked about the next phase. I plan on introducing different hardware, developing my own software using the example code, and taking advantage of the HPS and FPGA technology. My plan is to run OpenCV software with HPS (using external hardware) and accelerate the video processing with the FPGA portion.
Stay tuned for Part 3! Meanwhile, if you’re an MCU developer making the transition to FPGA, share your experiences, tips, and advice in the Comments!
When I started this project, I understood that the benefit of FPGAs was enabling developers to integrate a wide variety of functions into a single circuit and then modify those functions down the line—making them reconfigurable and future-proof. But this flexibility led me to wonder: How should I handle interfaces with external components, wiring to interfaces, and the like? With the average FPGA design cycle being two to three years, and considering the life span of current communications technologies (think USB 3.0 giving way to USB Type-C), I was boggled at how an FPGA could really be a benefit.
As I do with most MCU projects, I planned to start with example code and then build my own project out from a variety of examples. But with each example, I found myself less and less familiar and more and more confused. The code was organized into object definitions and function calls, which were recognizable to this MCU guy, but some definitions went into timed higher-order functions and others were just there. The functions, as always, performed tasks; however, some functions were dependent on others and some were not. The ones that were never called directly were still pivotal to the code's functionality, as I discovered by attempting to comment them out.
I was befuddled.
After searching for "Intel FPGA Setup Cyclone V," I decided to look at the examples and "Setup & Go" trainings available on YouTube. As I worked through these trainings, I began to see a common theme (see Figure 1).
With these observations, I went back to reviewing more sample designs and code, and I began to see where the Intel Cyclone V and the HPS-FPGA architecture really come into their own. Figure 1 shows the HDMI command set, which includes a number of uncalled portions of code. These uncalled portions do not run in sequence but, rather, in parallel. They function much like callbacks, activated by data from a higher-level controller in the HPS. The compiler then adds the capability of passing information over to the MCU, and vice versa. All that's needed to go between the controller and the MCU is a set of coordinating definitions and a timing schedule.
Figure 1: The HDMI command set, which includes a bunch of uncalled portions of code.
I began to see how the program flow worked.
This was a perfect example of using the strengths of both the HPS and the FPGA. The FPGA handled a lot of repetitive math and protocols, while the Linux and MCU portion handled the dynamic elements of the program. This really began to open my eyes to where an FPGA excels.
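The division of labor can be sketched in a few lines. Here, a stand-in "FPGA" block does the repetitive per-sample math on every tick, while a stand-in "HPS" block only steps in when a dynamic event arrives and reconfigures the fabric through a shared definition. The class names and the gain operation are illustrative, not taken from the actual HDMI design.

```python
class FpgaPipeline:
    """Runs every clock tick — like uncalled-but-always-active FPGA logic."""
    def __init__(self):
        self.gain = 1  # a shared "coordinating definition" the HPS can set

    def tick(self, sample: int) -> int:
        # Repetitive math, done in fabric on every tick regardless of the HPS.
        return sample * self.gain

class HpsController:
    """The dynamic side: reacts to events and reconfigures the fabric."""
    def __init__(self, pipeline: FpgaPipeline):
        self.pipeline = pipeline

    def on_user_input(self, new_gain: int) -> None:
        # The HPS never does the per-sample math; it only updates parameters.
        self.pipeline.gain = new_gain

fpga = FpgaPipeline()
hps = HpsController(fpga)
out1 = fpga.tick(10)    # 10 — fabric running with default parameters
hps.on_user_input(3)    # an HPS-side event changes the fabric's behavior
out2 = fpga.tick(10)    # 30 — same fabric, new coordinated definition
```

The fabric keeps grinding through samples whether or not the HPS ever intervenes; the HPS only touches the knobs. That is the pattern I kept seeing in the sample designs.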
As I dug into similar MCU-only designs, I discovered the other advantages of an FPGA: It requires far fewer components, and it allows a much less expensive (and less capable) MCU to achieve very impressive results. For example, an MCU-only design would need a higher-end core (a Cortex-A53 or the like) to begin tackling image processing, plus a GPU to handle graphics acceleration during processing and more RAM to round out the design. Even then, the end result is a locked module with limited upgradability once the design is complete. With an FPGA, the components reside in the same chip, so the production-level PCB design is simpler because it requires fewer component-to-component interfaces—and the resulting module remains upgradable and flashable.
The lightbulb going on over my head could probably be seen for miles around.
The benefits of using an FPGA go beyond the ability to integrate a wide variety of functions into a single circuit, minimize interfaces, and modify functions down the line. The FPGA changed the way I think about a task—from bolting on external components to something far more efficient. Rather than thinking, "Perform step one, then two, then three," I now think in terms of tasks: "Do A, B, and C, and when B is done, do D." Ahhh! The reasons for taking the time to go from MCU to FPGA development are becoming more and more clear.
Stay tuned for Part 4 in this four-part series!
I’m a long-time microcontroller (MCU) user and enthusiast, with a particular interest in the use of low-overhead modules in enabling greater communication and interface capacity than that traditionally achieved by an individual chip. Some of my favorite personal projects include developing an MP3 player, an alarm clock, a wireless ground moisture control system, a dog activity monitor, and a Bluetooth low energy wireless prosthesis control. In all of these, the MCU provided both ease-of-use and the essential capabilities for gathering information and issuing simple commands.
Recently, I was given a project that not only required me to make the transition to the intimidatingly expansive FPGA—field-programmable gate array—but also required me to move into the much larger and far more capable Arm Cortex family. This was a place outside the safety and ease I was accustomed to in the Mbed environment. In this four-part blog series, I'll examine how I translated, transferred, and transitioned my existing knowledge and experience with MCUs into the FPGA development environment. In Part 1, we'll begin with some advantages and disadvantages of FPGAs, introduce the Terasic DE10 Nano development kit, and look at the role of intellectual property in FPGA design planning.
If you’ve been developing projects with MCUs, you’ve probably found that the learning curve is not too steep, that tools are readily available, that development and revision processes are straight-forward, and that designs are very portable. You’ve probably also found, though, that their processing can be limited in terms of complexity, speed, and established interfaces. My relatively simple, personal projects up until the point of my transition were ideal for MCUs because they were neither complex nor had significant processing needs.
FPGAs are integrated circuits built from programmable logic elements—building blocks designed to be super flexible yet highly capable. For example, they can emulate microprocessors or RAM to boost performance, adapt to change by allowing new standards or algorithms to be implemented, and add communication interfaces—all of which help reduce total system costs and extend product lifecycles. The downside to this capability is that the learning curve is pretty steep. And for MCU developers, the learning curve is compounded by a shift in fundamental methodology in I/O and coding: Instead of a single port and voltage—or, likewise, a single port and protocol—FPGA development allows for multiple ports with multiple voltages, can use any protocol, and processes in parallel.
Fortunately, I discovered the Terasic DE10 Nano dev kit, which is built around the Intel Cyclone® V SoC. The Intel Cyclone® V SoC combines FPGA fabric with a dual-core Arm Cortex-A9. The kit makes the capabilities of the FPGA accessible by building in several support components, including display and communication ports, buttons and switches, pin mappings and a quick configuration tool, a JTAG debugger, and well-documented examples and guides from both Terasic and Intel®.
When planning for MCU developments, I would determine what interfaces were needed (i.e., SPI, I2C, Wi-Fi, and so on) and then make an informed selection based on voltage, pin counts, communication interfaces, library support, and price. With an FPGA, almost every interface is now possible, but the limiting factor is the number of logic cells, which are used to create the functionality of the port, soft core MCU, or memory element. The trade-off, then, is that the higher the logic cell count, the more capable the FPGA and… the higher the FPGA cost. Although FPGAs usually have a higher initial cost, they offer huge potential power and space savings—as they can combine multiple components into a single component.
I found myself at a crossroads: How would I know how many logic elements my design would need? The answer is dictated by the needs of the FPGA intellectual property—called IP—which consists of protocols, functions, my own code, and specific tasks that are normally performed by an external module. Almost all FPGA metrics break down into logic elements, registers, and total I/O banks, which are the differentiating units of measure for each chip. Roughly speaking, logic elements are the basic programmable blocks (typically a small look-up table paired with a flip-flop) that implement logic functions, registers are the flip-flops that hold state between clock cycles, and I/O banks are groups of physical pins that share a common voltage standard.
The importance of IP initially escaped me because I hadn't yet grasped that an IP block can stand in place of a physical, real-world device—an MCU, a communications controller, or anything else I would otherwise use another piece of silicon for.
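The logic-element budgeting trade-off lends itself to a quick back-of-the-envelope feasibility check before committing to a device. The IP names and LE counts below are hypothetical placeholders; real figures come from each IP core's documentation or a trial fit in Quartus.

```python
# Hypothetical LE estimates per IP block — illustrative numbers only.
ip_blocks = {
    "soft_uart": 300,
    "i2c_master": 450,
    "video_scaler": 18_000,
}

def total_logic_elements(blocks: dict) -> int:
    """Sum the LE estimates for every IP block in the design."""
    return sum(blocks.values())

def fits(blocks: dict, device_les: int, margin: float = 0.8) -> bool:
    """Leave ~20% headroom so the fitter has room to place and route."""
    return total_logic_elements(blocks) <= device_les * margin

print(total_logic_elements(ip_blocks))      # 18750
print(fits(ip_blocks, device_les=110_000))  # True — plenty of headroom
print(fits(ip_blocks, device_les=20_000))   # False — too tight after margin
```

It is the FPGA analogue of the pin-count and peripheral checklist I used to run for MCU selection: total up what the IP needs, add headroom, and pick the smallest device that clears the bar.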
Out of the box, the DE10 Nano uses the FPGA layer primarily as very low latency I/O expansion, as shown in Figure 1. All of this comes together to highlight another design feature of an FPGA: It can contain most of a PCB in a single chip and, thereby, allow much greater flexibility in future designs.
Figure 1: Terasic DE10 Nano Cyclone V FPGA and hard processor system (HPS) interface layout. (Source: Terasic)
Most of the low-level I/O is controlled and interfaced through the FPGA. This offers the advantage of reducing CPU time spent waiting on a low-level I/O change, and it allows signals to be conditioned or altered before delivery to the hard processor system (HPS). This makes perfect sense: the Cyclone V FPGA is a fabric designed to expand interface capacity, accelerate performance, and boost the capability of any paired HPS. In this case, as Figure 2 shows, the HDMI interface is non-native to the HPS, so there aren't many MCU resources developed for it.
Figure 2: FPGA and HPS interfacing setup. (Source: Terasic)
At first look, I noticed several positive aspects.
Overall, I'm liking FPGAs and find the expansion and acceleration capabilities intriguing; however, I wonder about their limitations. How fast can they go? What protocols can they support? How many logic elements (LEs) will they consume? The IDE is simpler than others I have used and includes amazing documentation to get started. The hardware seems extremely robust and capable, and I'm devising a project that I think will test the limits of the hardware on this board.
What will the project be? And how do we test the hardware limits of this board? Stay tuned for Part 2 of this blog series!
(Source: OG Arduino by Philliptorrone - Own work, CC BY-SA 3.0)
The Italian Renaissance was an incredible two-hundred-year period of human history marked by extraordinary advancement in the arts as well as science and technology. Leonardo da Vinci, Galileo Galilei, and Sandro Botticelli are but a few of the great minds who gifted the world with incredible knowledge, art, and inventions (Figure 1). A few centuries later, a renaissance in electronics would emerge from a small town in Italy named Ivrea. And it all began with a hand-soldered circuit board that would become known globally as Arduino.
Figure 1: Da Vinci's "Mona Lisa" shows the convergence of science and art during the Renaissance period. (Source: By Leonardo da Vinci - Cropped and relevelled from File:Mona Lisa, by Leonardo da Vinci, from C2RMF.jpg. Originally C2RMF: Galerie de tableaux en très haute définition: image page, Public Domain. From Wikimedia Commons)
Before the early 2000s, many engineers and makers interested in embedded electronics cut their teeth on the PIC-based BASIC Stamp platform. The BASIC Stamp became popular because the hardware was relatively inexpensive compared to most microcontroller platforms of the time: US$139 in the 1990s (almost $400 adjusted for inflation to 2022) got you a Stamp, a parallel-port programming cable, and a copy of the Stamp Editor. The BASIC-esque programming language (a variant named PBASIC) was easy to learn, but the editor only ran on Windows. Still, the emphasis of the BASIC Stamp was on those with a technical mindset. For many of those with an artistic bent who yearned for a way to integrate technology into their art, the BASIC Stamp proved less than ideal: It was not programmable on a Mac, and the cost was still a bit high, especially for students.
Around 2003 this would begin to change. Enter the Interaction Design Institute Ivrea (IDII) and a perfect storm of technology and art (Figure 2).
Figure 2: Interaction Design Institute Ivrea (IDII), the birthplace of Arduino. (Source: Arduino)
A group of students and professors whose work revolved around interactive art were frustrated that the technology of the time was more a hindrance than a help in bringing their creative works to life. Some of the earliest people involved were Hernando Barragán, Massimo Banzi, Casey Reas, and Ben Fry. Barragán's master's thesis was the Wiring development platform, built around the humble ATmega128 microcontroller and a handmade circuit board. Banzi was one of Barragán's advisors, along with Reas. Fry and Reas were the creators of the Processing integrated development environment (IDE). Barragán would leverage Processing as the basis for the original Arduino IDE, which was replaced only recently (September 2022) by the more modern Arduino IDE 2.0. These decisions would lay the foundations for the worldwide Arduino ecosystem.
NOTE: Since the initial IDE launch, Arduino has also released a command line interface (CLI) and a text-based linter, both useful for those seeking modern professional development tools.
The first commercially available board was the Arduino RS232 featuring through-hole components, a DB-9 serial port, and a DC barrel jack power supply (Figure 3). This design made it easy to hand-solder and reproduce in decent quantities. The simple design, coupled with the decision to release the hardware design under a Creative Commons license (specifically a CC BY-SA license), propelled Arduino into the hearts and minds of countless artists and electronics hobbyists.
Figure 3: The Arduino Board Serial, one of the first commercially available boards. (Source: Arduino/Nicholas Zambetti)
Low cost was another consideration, as the Arduino was initially targeted at art and design college students. The decision to release the hardware and software openly (the IDE is released under the GNU General Public License, version 2) is arguably one of the defining and, at the time, riskiest propositions for the then-fledgling electronics ecosystem. The pending closure of IDII in 2006, and its academic program being subsumed into the Domus Academy in Milan, also helped prompt the founders to adopt an open-source model for Arduino.
In 2008, the five founding members of the Arduino project formed Arduino LLC to handle the intellectual property of the Arduino. It was initially envisioned that other companies would manufacture and sell the "official" Arduino boards while Arduino LLC would receive royalties from these sales. Of course, due to the open-source nature of the platform, anyone could take the design files and create either exact duplicates or improved boards (Figure 4). The only stipulation was that they could not be named "Arduino," as that name was trademarked exclusively for boards licensed by Arduino LLC. Surprisingly, while many derivative boards did find their way to the market, they did not have a significant negative impact on official board sales. Customers did indeed reward Arduino LLC with loyalty, recognizing the superior quality of the hardware and the effort Arduino LLC put into expanding the hardware platform and constantly improving the software development tools. Additionally, the form factor of the original Arduino boards has been faithfully maintained throughout the lineage of the credit card-sized Arduino boards, including the Diecimila, Duemilanove, and the current Uno R3.
Figure 4: The modern Arduino Integrated Development Environment (IDE). (Source: Green Shoe Garage)
It was not always smooth sailing for the Arduino LLC team. In the 2010s, a legal dispute erupted amongst the founders. Without rehashing this dark period, the bottom line was that the trademark Arduino was only good for boards sold in the United States. A company (Arduino SRL) run by one of the founders, Gianluca Martino, held the Arduino trademark in Italy. In response, Arduino began to market Arduino boards outside the United States as Genuino. For a few years, there was quite a bit of confusion in the Arduino ecosystem regarding which boards were compatible with which companies' development software. In 2017, the other four founders regained the trademarks held by Arduino SRL, and once again, Arduino was made whole.
Summer has been in full swing for some time now, which means sunshine, outdoor play, and no homework have filled kids' days for the last couple of months! But just because the kids haven't had to go to school for a few months doesn't mean that learning can't jumpstart before the school year begins again. In fact, this just might be the perfect season for mixing the outdoors with STEAM (short for Science, Technology, Engineering, Art, and Mathematics) education and do-it-yourself (DIY) maker projects. STEAM learning combined with DIY maker projects offers hands-on, project-based learning that makes a great mental warmup for your kids just before they head back to school in the fall.
This STEAM-inspired three-article series will explore the available resources to assist you and your kids on a maker’s adventure and walk you through an embedded electronics project. We will cover the design, build, and code for the project as well as discuss how the data from the project can be useful to develop analytical and critical thinking skills. In this first segment, we will direct you to the resources that are available to get you started.
Someone once said “hardware is hard.” However, the reality is that hardware has never been more accessible, regardless of your technical background. Most major manufacturers of microcontrollers offer some sort of development board. These boards allow engineers to prototype concepts rapidly and test a hardware interface with a specific embedded platform. For the purpose of this series, development boards will serve as a shortcut. Instead of worrying about breadboarding a system from scratch (which is admittedly fun and should be tried at least once in your electronics education), starting with a development board ensures that we spend more time on the science involved in developing the bigger picture and less time troubleshooting the power or timing circuitry.
Perhaps the most popular maker-oriented microcontroller platform is Arduino. It is (of course) not the only platform that is available. In fact, if you or your kid already has some experience with Arduino boards, it might be worth using this time to expand your horizons and try a different platform. Check out Mouser’s Open Source Hardware site (Figure 1) for a list of great options, including product lines such as the STMicroelectronics Nucleo, TI LaunchPad, and BeagleBoard.
Figure 1: Mouser’s Open Source Hardware site lists great hardware for summer and year-round electronics projects. (Source: Mouser.com)
I recommend using a development board whose general-purpose input/output (GPIO) pins are at least 5V tolerant. While 3.3V I/O is becoming increasingly common for microcontroller platforms (and even lower voltages are now used in applications with a significant need for energy efficiency), many older, less expensive sensors and actuators popular among makers require 5V.
Speaking of sensors, for prototyping it is good to see if a desired sensor is offered as a so-called “breakout board” (BOB). Just like development boards give you functionality out of the box, a BOB lets you spend more time tinkering at the project level and less time troubleshooting to enable the sensor to work. Though you will pay a little extra money, you will ultimately save on time. Just be sure the microcontroller input/output (I/O) voltage and the BOB I/O voltage are the same; otherwise, an interfacing chip known as a level-shifter will be necessary.
The embedded platform you choose will dictate which development software and operating system you will need to use. Thankfully, Windows is pretty much the common denominator for most platforms, and chances are your computer is running either Windows 7 or Windows 10. Linux and Mac OS support is more abundant than a decade ago, but be aware that these two operating systems do not support all embedded platform development tools.
Here are two helpful hints to try if you plug a development board into your computer and nothing seems to happen: First, Windows machines, especially Windows 8 and prior versions of the operating system, will require Universal Serial Bus (USB) drivers. Check the documentation that comes with your board to get the link to download any necessary software. Second, not all USB cables are the same. Be sure to verify the mini-USB versus micro-USB connectors. Also, some USB cables are for charging only with no data transmission. If in doubt, swap the cable out for a different one to ensure you are using one that has data wires.
Crack open a notebook or fire up your favorite note-taking app, and start by sketching out the idea for your project and taking notes on potential parts. Personally, I like to download the datasheets into my note-taking app as well. One drawback to be aware of when using development and breakout boards is that sometimes they are hardwired in ways that make them incompatible with a specific type of integration. For example, once I was working on a project that involved two sensors that shared the same GPIO pins. It required me to cut the trace on one of the BOBs and solder a wire to a different pin. Checking out datasheets before buying any parts can help you detect these kinds of concerns.
KiCAD and EagleCAD are probably the two most popular platforms for schematic capture and printed circuit board (PCB) layout. KiCAD is open source, while EagleCAD, now a product of Autodesk, interfaces with tools like Fusion 360 for creating 3D-printable enclosures and mechanical components for your project. We will discuss this topic more in part 2 of this series.
I like to use Mouser's Saved Projects feature on their website to build my bill of materials (BOM). Not only does it help me check parts availability and costs, but I can also easily share a prebuilt shopping cart with fellow engineers and makers who might be interested in building a similar project. I also like that I get notifications when a part reaches its end of life (EOL), so I can keep my designs up to date. If you already have your BOM in a spreadsheet, you can also check out Mouser's BOM import tool. Another useful suite of tools is Mouser's mobile website and iOS/Android apps, which are great for doing part research on the go or when you're out in the field and need the datasheet for a part you're troubleshooting. Check out the following resources for additional help with your parts:
Mouser's Part Search Add-in installs into Outlook and Excel, allowing you to launch a part search with a single click and without opening any other program (Figure 2). The add-in also gives you the latest information, empowering you to make sound purchasing decisions: Each lookup displays the part number, the manufacturer's name, the part's description, and the latest pricing and availability at Mouser.com.
Figure 2: Mouser’s Microsoft Office Part-Search Add-in feature is launchable with a single click and without opening any other program. (Source: Mouser.com)
Look for project ideas throughout the school year and inspiration by browsing projects on Mouser’s Open Source Hardware site or others across the Internet. Many of these sites have projects as well as communities of makers and engineers who share news, reviews, and hardware tutorials.
That's it for now, but remember to check back when we jump into a step-by-step look at mixing end-of-summer fun with hands-on making and STEAM learning in part 2 and part 3. Do you have comments or questions? If so, please be sure to let us know down below!
If you are an engineer or developer tasked with building embedded systems (or software, devices, networks, etc.), one of your highest priorities is—or should be—identifying and minimizing potential data security vulnerabilities. To effectively meet this goal, you need to understand how systems get hacked, and ultimately understand how to “think like a hacker.”
Hacking is all about exploiting vulnerabilities. These could be design flaws, weak access controls, misconfigurations, or many other issues. This article peeks into the mind of a hacker. As you’ll see, their process and mindset in finding and exploiting vulnerabilities are much different from how engineers approach system development.
What goes on inside a hacker’s mind? It’s an entertaining, chaotic, and extremely volatile place, so bring coffee (lots of it) and let's dive in.
The most striking difference between hackers and engineers is how they address a challenge. Engineers (and scientists, for that matter) are systematic: Problems are first defined and analyzed, a plan (or hypothesis) is formulated, and the hypothesis is tested. Results are then analyzed for successes and failures, and conclusions are drawn.
This is the scientific method. It has served humanity well for hundreds of years. Hacking subverts this process. First, there is no plan, but rather a mission. Plans for a hacker are loose and flexible. Remember, a hacker is not building something that must stand the test of time. Rather, they are breaking in. They do not need to satisfy a manager or CEO; they only need to complete a mission.
Where engineers are systematic, hackers are pragmatic and "chain reaction" driven in their methodology. There are similarities, but the core difference is that hackers will go to almost any length to accomplish their mission. Moreover, they can discard results that do not help them rather than having to explain them to others.
This is sometimes described as list versus path thinking: engineers work methodically through a list of requirements, while hackers follow whatever path through the system brings them closer to their goal.
In brief, engineers aim to be thorough; hackers aim to be effective. These may seem like minor nuances, but when put into action those nuances have significant implications.
When a hacker is breaking into an environment or system, they typically follow a common pattern, called a kill chain. As the hacker progresses through systems and networks, they will seek out higher degrees of access and authority within the environment (Figure 1). Eventually, when they have sufficient access, they can steal the data they want and/or plant malicious code.
Figure 1: If you can catch hackers early in the kill chain, you can prevent a hack from happening.
Hackers often dwell inside an environment for a long time: 100 to 140 days on average. The 2013 hack of retail chain Target, for example, took the attackers more than 100 days to fully execute. If you can catch hackers early in the kill chain, you can prevent a hack from happening.
It is important to note that most hacking is automated using bots. While we describe these steps as though a person were performing them, bots do most of the real work.
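The value of early detection can be sketched in code. This is an illustrative model only: the stage names below follow the commonly cited Lockheed Martin Cyber Kill Chain, and the thresholds are assumptions for the sake of the example, not definitions from this article.

```python
from enum import IntEnum

class KillChainStage(IntEnum):
    # Stage names follow the widely cited Lockheed Martin Cyber Kill Chain.
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

def detection_value(stage: KillChainStage) -> str:
    """The earlier an intrusion is detected, the more damage is prevented."""
    if stage <= KillChainStage.DELIVERY:
        return "early: the hack can still be prevented"
    if stage <= KillChainStage.COMMAND_AND_CONTROL:
        return "late: the attacker has a foothold; contain and eradicate"
    return "too late: assume data loss; begin incident response"

print(detection_value(KillChainStage.RECONNAISSANCE))
# → early: the hack can still be prevented
```

An `IntEnum` is used so stages can be compared with `<=`, mirroring the idea that an attacker progresses through stages in order.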
Hackers look at things differently than engineers do. The sections that follow break down the most important of those differences.
It is easy to miss a glaring weakness when you are deep in development. Step back from the development and ask yourself some basic questions about your work. Use the "Five Whys Deep" assessment: ask why a design decision is secure, then question each answer in turn, five levels deep. The point of this exercise is to identify obvious weaknesses. A hacker will notice them far faster than you think.
What is the worst possible scenario? How likely is it to happen? Hackers do not have a moral compass; they will not feel compassion for you while your network or applications struggle to recover from a disaster. As such, you need to make plans to handle those worst-case scenarios.
However, be careful not to get entangled in so-called "zombie scenarios": disasters that arise from a ludicrous sequence of events and for which there is no sensible response. Most zombie movies are based on this premise.
You must know every possible way anybody or anything can access your system; a hacker will try all of them, many times. You might think your Bluetooth interface is super secure, but there are dozens of ways to exploit Bluetooth that can render it completely insecure. Make sure you aggressively test every interface, regardless of how obscure you made it.
Hackers love data, some types more than others. Data storage is also one of the ways hackers gain persistence in an environment, so you must carefully analyze your system's data.
Hackers understand that humans are the weak link in data security. Not only are we inconsistent and unreliable, but we are extremely susceptible to manipulation. If your system involves humans in any capacity (which it does), then it has weaknesses.
All information security problems generally boil down to human weaknesses. Whether we misconfigure systems or write poor code, humans are the weakest link. Assume users will make mistakes, and a lot of them, and give human touchpoints extra attention.
Hackers love obscure technical information and will dig up a random document you put in Pastebin years ago to use that document’s data against your system. This is part of the fun of hacking new systems.
Be cautious about what types of technical data you release into the public. Assume the hackers will get it and analyze it. If you have a product that is being developed in an open environment, then be extra diligent in designing components and features in a secure manner.
Many hackers got their first taste of hacking from the 1983 movie WarGames. There is a great scene midway through the movie where a computer scientist rebukes his nerdy colleague for thinking of backdoors to computer systems as secrets: "Mr. Potatohead! Back doors are not secrets!"
His words are just as true today as they were then. Backdoors into applications and devices are common, and hackers will look for them. Exploiting a backdoor is one of the oldest and most reliable techniques for breaking into a system. It worked in WarGames, and it still works today.
While you might deeply care about the security of your system, do your suppliers or partners share the same level of concern? Hackers routinely target third-party components because they can attack a broad set of targets with a single technique. Heartbleed, a flaw in the OpenSSL cryptographic library, was a perfect example of this danger: OpenSSL is embedded in millions of products, so one vulnerability left millions (probably billions) of devices open to attack.
If you integrate a third-party component into your system, you inherit all the weaknesses of that component. While the product may belong to somebody else, you will be responsible for its security.
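As a minimal sketch of taking that responsibility seriously, the snippet below checks a project's dependency list against a set of known-vulnerable versions. The package names, versions, and the `KNOWN_VULNERABLE` set are hypothetical; a real audit would query a live vulnerability feed such as a CVE database rather than a hard-coded set.

```python
# Hypothetical known-vulnerable (package, version) pairs for illustration.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1"),   # e.g., the Heartbleed-affected release line
    ("examplelib", "2.3"),
}

def audit(dependencies: dict[str, str]) -> list[str]:
    """Return the names of dependencies with known vulnerabilities."""
    return [name for name, version in dependencies.items()
            if (name, version) in KNOWN_VULNERABLE]

deps = {"openssl": "1.0.1", "zlib": "1.3"}
print(audit(deps))  # → ['openssl']
```

The point is not the lookup itself but the habit: every third-party component you ship should be enumerated somewhere you can re-check as new vulnerabilities are disclosed.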
Legitimate user accounts are ultimately what hackers want. Once they have credentials, hackers can escalate their privileges and then move through your system. Moreover, use of legitimate credentials does not usually raise alarms.
While you may not be able to protect user credentials at all times (they are, after all, in the hands of humans), you can still prevent those credentials from being used maliciously. This begins with implementing least-privilege rights: users should never have more access than they need. Furthermore, you should aggressively test your systems against privilege escalation attacks.
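A deny-by-default permission check is one simple way to express least privilege. The roles and actions below are hypothetical, a sketch of the principle rather than a prescribed implementation:

```python
# Deny-by-default role permissions (hypothetical roles and actions).
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles and unlisted actions are denied by default.
    return action in ROLE_PERMISSIONS.get(role, set())

# A stolen "viewer" credential cannot be used to manage accounts:
assert is_allowed("viewer", "read")
assert not is_allowed("viewer", "manage_users")
assert not is_allowed("intruder", "read")
```

The design choice that matters is the default: anything not explicitly granted is refused, so a compromised low-privilege account yields the attacker as little as possible.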
Is your system part of a larger whole? Could blinding one part of the system leave other parts open to attack? What about feeding your system false data? This is how the Stuxnet malware worked: it fed false information to industrial control systems and then overloaded them. If a hacker wants to steal data from you or disrupt operations, it may be as easy as overloading your system with excess network traffic.
Denial of service attacks are difficult to stop. When designing your system, you must consider how it could potentially be overloaded and build in mechanisms to either stop or ignore overwhelming amounts of information. Moreover, it is important to always validate that data sent to your system is coming from a trusted source.
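One common mechanism for ignoring overwhelming traffic is a token bucket, which admits requests only as fast as tokens are replenished and drops the rest. This is a generic sketch, not a mechanism described in the article; the rate and capacity values are arbitrary:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: excess requests are dropped
    instead of being allowed to overload the system."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Replenish tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or ignore the request

# A burst larger than the bucket's capacity is mostly rejected:
bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(20)]
print(f"admitted {results.count(True)} of 20 requests")
```

Pairing a limiter like this with source validation (for example, authenticating or signing incoming messages) addresses both halves of the advice above: cap the volume, and trust only known senders.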
As a design engineer, identifying and minimizing potential data security vulnerabilities are primary goals. Hackers approach their work much differently than engineers do; rather than taking a systematic approach, they prefer a kill chain approach where they incrementally and persistently look for vulnerabilities to exploit.
“Thinking like a hacker” requires you to look at the systems you design differently. Part of this means understanding the technical aspects of vulnerabilities and solutions; however, a larger part requires observing the obvious, understanding human errors and indifference, and understanding what hackers seek and the clues they use.
Mouser is committed to helping engineers develop secure systems. Check out the Think Like a Hacker webinar—developed in partnership with Anitian—as well as our Data Security eZine. Also, stay tuned for part 2 of this blog, which examines techniques for building a secure system.
Check out Part 2 now!
Copyright ©2024 Mouser Electronics, Inc.