LiDAR—light detection and ranging, also written Lidar, LIDAR, and sometimes LADAR—made early headlines as a key enabling technology in the development of self-driving cars. LiDAR can identify upcoming objects and determine their speed without interference from glare, weather conditions, or harsh environments.
But LiDAR has more diverse uses, as design engineers can attest. It has terrestrial, airborne, and mobile applications.
LiDAR uses ultraviolet, visible, or near-infrared light to image objects based on differences in laser return times and varying laser wavelengths. This is ideal in creating high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping, and laser altimetry. LiDAR's spatial mapping capability creates accurate contoured digital 3-D representations of areas on the earth's surface and ocean bottom. On a smaller scale, some new smartphones and tablets use a LiDAR scanner to measure distances, which comes in handy when shooting photos at night or shopping.
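The ranging principle behind these maps is simple time-of-flight arithmetic: a laser pulse travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch, using an illustrative pulse time rather than figures from any particular sensor:

```python
# Time-of-flight ranging: the principle behind LiDAR distance measurement.
# A pulse travels to the target and back, so distance = (c * t) / 2.
# The example pulse time below is illustrative, not from a datasheet.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the target from a measured round-trip pulse time."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A return pulse arriving 667 nanoseconds after emission corresponds
# to a target roughly 100 metres away.
print(round(tof_distance_m(667e-9), 1))
```

Varying the wavelength, as the article notes, changes what the pulse reflects off (foliage, ground, water), but the distance arithmetic stays the same.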
In this week's New Tech Tuesdays, we'll check out LiDAR products from Analog Devices, Intel, and Seeed Studio, which all have diverse applications.
Let's start with the Analog Devices Inc. LiDAR Prototyping Platform, a broad-market prototyping platform. Designers can use the out-of-the-box AD-FMCLIDAR1-EBZ platform on FPGA development boards to develop software and algorithms for a broad range of depth-sensing applications. The modular platform is built on pre-engineered systems intended to interoperate, reduce risk, and improve predictability and reliability compared to a one-off design built from scratch. The platform's hardware consists of a laser transmitter board and an analog front end (AFE) board that plugs into a high-speed data acquisition (DAQ) board with an FMC-compliant connector interface, which lets designers connect their preferred FPGA board. The platform reduces system development time, shortening the path to a working LiDAR system prototype in applications such as environmental monitoring, aerospace/defense, security, and Industry 4.0.
The Intel® RealSense™ LiDAR Camera L515 can generate 23 million accurate depth points per second, making it design-friendly for a wide range of applications. The high-resolution, hockey-puck-shaped camera can provide precise volumetric measurements of objects, which comes in handy in inventory management systems such as robotic arms for bin picking, logistics, and 3D scanning. The palm-sized L515 uses a proprietary microelectromechanical systems (MEMS) mirror scanning technology, enabling better laser power efficiency than other time-of-flight technologies. It packs numerous capabilities into a small form (61mm x 26mm and 100g) while being power efficient (3.5W). The L515 can capture rapidly moving objects with minimal motion blur thanks to its internal vision processor, motion-blur artifact reduction, and short photon-to-depth latency.
The Seeed Studio TF03-100 LiDAR Long Range Distance Sensor is an industrial-grade device with a multitude of uses. The TF03-100 stands out as an all-weather device that can operate in harsh environments and survey rough terrain or remote areas, making it a good fit for an uncrewed aerial vehicle gathering data for 3D point cloud terrain models. The TF03-100 is built into an aluminum alloy enclosure and features IP67 water and dust resistance. Its multiple built-in operating modes can change parameters and configurations to suit different applications. The TF03-100, which has an adjustable frame rate of up to 1kHz, also works with vehicle safety-warning systems to alert drivers of hazards up to 100m away, which is ideal for today's sensor-laden smart vehicles.
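As an illustration of how a high-frame-rate distance sensor might feed a safety-warning system, the sketch below estimates closing speed and time-to-collision from two successive range readings taken at a 1kHz frame rate. The function names and example numbers are hypothetical, not taken from the TF03-100 documentation:

```python
# Hypothetical hazard-warning logic built on successive range readings from
# a long-range distance sensor. All names and numbers here are illustrative
# assumptions, not values from any datasheet.

def closing_speed_m_s(prev_range_m, curr_range_m, frame_period_s):
    """Positive result means the object is getting closer."""
    return (prev_range_m - curr_range_m) / frame_period_s

def time_to_collision_s(curr_range_m, speed_m_s):
    """Seconds until contact at the current closing speed (inf if opening)."""
    return float("inf") if speed_m_s <= 0 else curr_range_m / speed_m_s

FRAME_PERIOD_S = 1 / 1000  # 1kHz frame rate

# An object moving from 50.000m to 49.972m between two consecutive frames
# implies a closing speed of about 28 m/s (roughly 100 km/h).
speed = closing_speed_m_s(50.000, 49.972, FRAME_PERIOD_S)
ttc = time_to_collision_s(49.972, speed)
print(f"closing at {speed:.0f} m/s, impact in {ttc:.2f} s")
```

In practice a real system would filter noise across many frames before raising an alert, but the high frame rate is what makes millimetre-scale frame-to-frame changes resolvable at all.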
We run across LiDAR all the time in our cars and smartphones. But it enhances many functions we don't always see, such as camera triggers, luggage handling, parcel delivery, robotics, land surveying, power-line inspection, forestry and farming, and more. For design engineers, it has numerous and diverse applications.
What are e-bikes, and what is their main purpose? E-bikes—electric bicycles equipped with electric motors to assist riders with propulsion—represent a fusion of technology with the traditional biking experience. Their assisted pedaling power makes cycling easier, more accessible, and downright fun. Pedal assistance can be particularly beneficial in covering longer distances, tackling hilly terrain, and arriving at destinations without excessive physical exertion or sweating, making e-bikes a practical option for commuting and urban travel.
E-bikes also offer an environmentally friendly alternative to cars for short to medium-distance travel, helping to reduce traffic congestion and lower carbon emissions. In fact, e-cargo bikes (i.e., e-bikes equipped with baskets for transporting goods) are perceived as an advantageous solution for urban last-mile logistics. Using e-cargo bikes is ideal for low volumes of goods and short distances in urban areas where access to conventional cars may be prohibited.
Lastly, even with motor assistance, e-bikes still provide a form of exercise, as pedaling is typically required to engage the motor, especially on pedal-assist models.
This week, we take a quick look at different types of e-bikes, their regulations, and some of the embedded tech powering them.
While there are various types of e-bikes, they generally fall into two main categories: pedal-assist models, in which the motor engages only while the rider pedals, and throttle-controlled models, in which the motor can propel the bike without any pedaling.
With e-bikes making their way into parks alongside traditional bicycles, the U.S. National Park Service (NPS) has established regulations governing the use of electric bicycles within the National Park System. The NPS defines an e-bike as “a two- or three-wheeled cycle with fully operable pedals and an electric motor of less than 750 watts that provides propulsion assistance.” An updated memorandum adds that an e-bike may not exceed 100 pounds or travel faster than 20mph when powered solely by the motor. Many states have since created their own e-bike regulations, using a three-class system that limits the maximum assisted speed.
The average range of an e-bike—the distance it can travel on a single charge—varies widely depending on many factors. However, most e-bikes can typically cover 40 to 80 kilometers (25 to 50 miles) on a single charge. This range can be influenced by factors such as battery capacity, motor efficiency, riding conditions, rider input, bike load, e-bike settings, and the overall condition of the e-bike, including the age of the battery.
E-bikes with larger batteries can store more energy, providing a longer range. Similarly, the efficiency of the electric motor plays a critical role in how effectively it uses battery power. Hills, headwinds, and rough terrain, in addition to the bike’s carrying load, all affect how hard the motor needs to work. Also, the more a rider pedals and the less they rely on the motor, the longer the battery will last. Many e-bikes have different modes, such as eco, normal, and high power, which determine how much the motor assists. Over time, batteries lose capacity as they age, leading to decreased range.
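A rough range estimate follows from simple arithmetic: battery capacity in watt-hours divided by average consumption per kilometer. The consumption figures below are illustrative assumptions, which is why real-world range varies so widely:

```python
# Back-of-the-envelope e-bike range estimate from battery capacity and an
# assumed average consumption. Both figures are illustrative, not measured.

def estimated_range_km(battery_wh: float, consumption_wh_per_km: float) -> float:
    return battery_wh / consumption_wh_per_km

battery_wh = 500       # a common mid-size e-bike battery
eco_wh_per_km = 7      # light assist, rider pedaling hard, flat terrain
turbo_wh_per_km = 15   # heavy assist, hills, headwind, loaded bike

print(round(estimated_range_km(battery_wh, eco_wh_per_km)))    # roughly 71 km
print(round(estimated_range_km(battery_wh, turbo_wh_per_km)))  # roughly 33 km
```

The spread between the two assumed riding styles alone brackets much of the 40 to 80 kilometer typical range cited above.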
In recent years, there have been several notable developments in e-bike technology, including lighter-weight designs with foldable features (Figure 1), higher capacity batteries with smart connectivity, upgraded safety features, and automatic gearing systems.
Figure 1: Significant trends in recent years have been the development of lighter e-bikes with foldable design features. (Source: Tohamina/stock.adobe.com)
Modern e-bikes often feature smart systems that integrate digital technologies with physical components, enhancing user experience with advancements such as over-the-air updates, connectivity with apps like Bosch's e-Bike Flow App, and tracking personal cycling goals directly through the bike's system.
E-bike safety technology has also seen enhancements, including new anti-lock braking systems that are smaller and lighter than earlier designs and can significantly reduce accidents. Additionally, there are innovations like digital anti-theft features, alarm systems, and tracking capabilities for enhanced security.
Additionally, the adoption of automatic gearing systems like that of the Enviolo® Harmony™, which has a number of advantages over traditional gearing systems, has greatly simplified the riding experience and reduced the overall maintenance needs of e-bikes while also contributing to user-friendly e-bike designs.
All these technological developments reflect riders’ evolving demands and preferences, offering more efficient, versatile, and accessible options for a broader range of consumers. The e-bike industry continues to innovate, focusing on sustainability, user-friendliness, and technological integration.
This week’s New Tech Tuesday showcases gate drivers from Texas Instruments and Infineon Technologies. Both gate drivers are ideal for half-bridge brushless direct current (BLDC) motor drives and have built-in bootstrap diodes for the high-side capacitor.
The Texas Instruments DRV8300 and DRV8300-Q1 are components designed for three-phase BLDC motor control, commonly used in applications like e-bikes. These devices are essentially gate drivers that facilitate the motor’s control and operation by driving the gate terminals of the MOSFETs in the motor's electronic speed controller.
The DRV8300-Q1, specifically, is an automotive-grade version that offers robustness and higher reliability. It supports up to 100V operation, making it suitable for higher voltage applications. This series is known for its advanced protection features, ensuring enhanced system robustness, which is crucial in e-bike applications where safety and reliability are paramount. Additionally, these devices are designed to minimize noise issues and improve efficiency in motor operation.
Their use in e-bike applications is especially beneficial due to their efficiency in controlling the motor, contributing to the overall performance and reliability of the e-bike. This makes e-bikes equipped with such technology more efficient, safer, and potentially more powerful, enhancing the riding experience.
Next, the Infineon Technologies MOTIX™ 160V Gate Drivers are an advanced solution for motor control, particularly suitable for e-bike applications. These drivers are designed for half-bridge BLDC motor drives, a common configuration in e-bikes and other mobility applications. One of their key features is the ability to support a 100 percent duty cycle operation through a trickle charge pump, ensuring consistent and reliable motor performance. The gate drivers also boast protection features such as undervoltage lockout (UVLO) and overcurrent protection, which are crucial for the safety and durability of e-bike systems.
Furthermore, the MOTIX series emphasizes optimized efficiency and reduced electromagnetic interference (EMI), which are essential for maintaining the performance of e-bikes in diverse operating environments. The built-in bootstrap functionality of these drivers facilitates their use in high-power applications, making them ideal for more demanding e-bike designs that require robust power-handling capabilities.
Overall, the Infineon MOTIX 160V Gate Drivers provide e-bike and other e-mobility applications with enhanced control, efficiency, and safety features, contributing to their improved performance and reliability.
E-bikes represent a fusion of traditional biking with advanced technology, providing an assisted pedaling experience that makes cycling more accessible, enjoyable, and practical. They serve as an efficient, environmentally friendly alternative for urban travel and commuting, offering numerous benefits for a diverse range of users. E-bikes have evolved with advancements in technology, leading to enhanced performance, safety, and versatility. These technologies allow e-bikes to achieve better speed control, more extended range, and overall improved riding experience.
Eleni Papaioannou, Christina Iliopoulou, and Konstantinos Kepaptsoglou, “Last-Mile Logistics Network Design under E-Cargo Bikes,” Future Transportation 3, no. 2 (June 2023): 403–16, https://doi.org/10.3390/futuretransp3020024.
“General Provisions; Electric Bicycles,” Federal Register, November 2, 2020, https://www.federalregister.gov/documents/2020/11/02/2020-22129/general-provisions-electric-bicycles.
“Enviolo® Harmony™ Fully Automatic Transmission,” EVELO, accessed December 11, 2023, https://evelo.com/pages/automatic-shifting.
News headlines about data breaches have become so commonplace that reaction to a massive theft of data one week is quickly overtaken the next week by accounts of an even more egregious security breakdown.
Hackers have stolen information from every type of organization—even three-letter government agencies once considered impenetrable. Throughout all of this, the useful lesson lies less in the notion that no one is immune and more in an important realization: Security threats and their mitigation are a constant struggle involving not just cyber experts but everything and everyone that touches data. In 2018, one of the greatest gaps in data security lies in appreciating that security is an organizational problem that must combine technologies, practices, and policies at every level of the system, whether in enterprise IT or spread across a cloud-based IoT application.
Take data encryption for example. Developers recognize the fundamental need for cryptographic methods to ensure the integrity of data and metadata passed across networks and stored on hosts. Technologies such as elliptic curve cryptography have gained increased acceptance with their ability to provide the same level of security as older crypto approaches but with much shorter key lengths and faster computation—important considerations for resource-constrained IoT devices. Yet, even the most robust crypto algorithm cannot ensure security without accompanying policies for protecting crypto keys throughout the key life cycle, including key creation, device provisioning, and even key revocation.
The use of robust technologies, practices, and policies for cryptography is necessary for security but by no means sufficient. The overall integrity of an application also requires assurance that data suppliers and consumers are authorized participants in the overall data workflow. This assurance takes the form of authentication protocols such as transport layer security, elliptic-curve Diffie–Hellman, and others in widespread use on the Internet and in web applications.
On the Web, authentication is typically limited to host authentication to assure users that they are in contact with the intended host. Although this one-sided authentication might be satisfactory for web applications, IoT applications typically require mutual authentication, where both IoT device and host each validate the identity of the other. Even so, developers need to combine authentication technologies with suitable practices. For example, authentication protocols might allow reuse of the same session key from session to session—a practice that exposes devices and hosts to man-in-the-middle attacks and session hijacking as recently documented by Carnegie Mellon University's Computer Emergency Response Team (CERT).
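The session-key pitfall can be avoided by using fresh per-session challenges. Below is a minimal sketch of mutual challenge-response authentication using only the Python standard library, with a pre-shared key standing in for a credential provisioned out of band; real deployments would use TLS or a comparable protocol rather than this simplified scheme:

```python
import hashlib
import hmac
import secrets

# Minimal sketch of mutual authentication with fresh per-session nonces.
# The pre-shared key is an illustrative stand-in for a device credential
# provisioned out of band; it is not a recommended key-management scheme.
PSK = b"pre-shared-device-key"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by keyed-hashing it with the shared secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time comparison resists timing attacks."""
    return hmac.compare_digest(respond(key, challenge), response)

# The host authenticates the device with a fresh nonce (never reused)...
host_nonce = secrets.token_bytes(16)
assert verify(PSK, host_nonce, respond(PSK, host_nonce))

# ...and the device authenticates the host with its own fresh nonce,
# giving the mutual authentication IoT applications typically require.
device_nonce = secrets.token_bytes(16)
assert verify(PSK, device_nonce, respond(PSK, device_nonce))
print("mutual authentication succeeded")
```

Because each side issues a new random nonce every session, a captured response cannot be replayed later—the practice-level fix the CERT advisory points toward.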
Proper encryption and authentication might still not be enough to ensure the validity of data generated by an IoT device, aggregated by an edge device, and eventually consumed by a cloud-based application. Bad actors can exploit device software update delays to install corrupted versions under their control. Thus, devices and hosts might be using recognized crypto keys and authentication practices but the software running on those systems might itself be untrustworthy.
Secure over-the-air (OTA) updates and secure boot methods are meant to protect against these attacks but vulnerabilities can exist at each layer of the software stack. Ideally, developers employ sufficient security measures to ensure the use of valid software at each layer of the underlying software, thereby creating a robust root of trust for all other security features, software applications, and data operations. In practice, however, building this root of trust can fall short in IoT implementations due to a combination of factors ranging from limited device resources for performing security operations to limited understanding of proper security development practices.
With its Device Identifier Composition Engine (DICE) specification, the Trusted Computing Group proposes a multi-phase approach that uses secrets associated with each phase of the boot process to create a root of trust even in resource-constrained devices. An emerging class of hardware devices already support DICE and work with complementary cloud services to help harden security.
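The idea behind a DICE-style root of trust can be sketched in a few lines: each boot stage derives a secret from its parent's secret and a hash measurement of the next stage's code, so tampered firmware ends up with a different downstream identity. This is a conceptual sketch of the layered-derivation idea, not an implementation of the TCG specification:

```python
import hashlib
import hmac

# Conceptual sketch of a DICE-style derivation chain: each boot stage's
# secret is derived from its parent's secret plus a measurement (hash) of
# the next stage's code. Stage contents are illustrative placeholders.
def derive_secret(parent_secret: bytes, code_image: bytes) -> bytes:
    measurement = hashlib.sha256(code_image).digest()
    return hmac.new(parent_secret, measurement, hashlib.sha256).digest()

UDS = b"unique-device-secret"  # burned into hardware at manufacture (illustrative)
bootloader = b"bootloader v1"
firmware = b"application fw v1"

s1 = derive_secret(UDS, bootloader)  # secret bound to the bootloader
s2 = derive_secret(s1, firmware)     # secret bound to bootloader + firmware

# Tampered firmware yields a different derived secret, so it cannot present
# the genuine device identity to a service that knows the expected value.
s2_tampered = derive_secret(s1, b"application fw v1 (modified)")
print(s2 != s2_tampered)
```

The appeal for constrained devices is that each stage needs only a hash and a keyed derivation—no heavyweight crypto engine is required to anchor the chain.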
Cryptography, authentication, and trusted devices can serve as key enablers for security. Improperly executed, however, those same factors can present additional threat surfaces. Indeed, the development and deployment of any smart device presents multiple threat surfaces, and more so when built into IoT applications. Few applications share the IoT's expansive reliance on separate communities of developers, technicians, and users. Each participant in that chain plays a critical role in the successful deployment and operation of these complex applications and holds responsibility for maintaining secure practices within their purview, including avoiding exposure to the social-engineering attacks underlying the most infamous breaches.
The good news is that the industry is beginning to recognize the expansive and collaborative nature of system security. In its recent Security Manifesto, ARM calls for a shared sense of responsibility among technology users and providers alike for reducing the effectiveness of cybercriminals. In 2018, a deep appreciation of the implications of shared responsibility stands as a significant hurdle for achieving security. By approaching security as more than just a technological problem, the industry can begin creating an environment where bad actors find fewer opportunities for compromising securely connected systems.
The evolution of human-machine interface (HMI) solutions continues daily. The technology is enhancing human interaction with devices, opening the door for user-experience solutions that can create greater access and function.
Growing up, we got a peek at these HMI designs in science-fiction movies and TV programs. We've watched Capt. Kirk (Star Trek) issue voice commands to virtual assistants. We've watched Tony Stark (Iron Man) swipe a holographic interface in mid-air. We've seen Luke Skywalker (Star Wars) use neuronal control to operate a synthetic arm.
But now, much of that fiction is becoming scientific fact. Sensors featuring touch-free gesture control are found in modern vehicles and Internet of Things (IoT) devices, making us all feel like we're aboard a futuristic starship.
Design engineers are also working on making HMI functions in vehicles safer. Given the reaction time of drivers, safety must be a critical design factor, driving development in hand-detection sensors. Other applications include detecting humans inside or outside the vehicle and detection tasks associated with autonomous driving.
These HMI solutions and more are being used or developed today.
This week’s New Tech Tuesday will look at new products from Maxim Integrated, ams OSRAM, and Microchip Technology that target HMI development.
Maxim Integrated's MAX25405 IR Gesture Sensor is a data-acquisition system for gesture and proximity sensing. The MAX25405 recognizes hand-swipe gestures, air clicks, flicks, finger and hand rotation, multizone proximity detection, and linger to click. The 4mm x 4mm device integrates a complete optical system including lens, aperture, visible light filter, and a 6x10 photodetector array. The proximity, hand detection, and gesture recognition functions operate by detecting light reflected from the controlled IR-LED light source with the integrated optical sensor array. The sensor can detect these gestures even when exposed to bright ambient light. A low-power CPU, such as the MAX32630, is required to process the data from the sensor.
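To illustrate in principle how gesture recognition over a small photodetector array can work, the toy sketch below tracks the horizontal centroid of reflected-IR intensity across frames to classify a swipe. This is purely illustrative and is not Maxim's algorithm:

```python
# Toy swipe-direction detector over a 6x10 photodetector array, in the
# spirit of (but not taken from) the MAX25405's gesture processing.
# Each frame is a 6x10 grid of reflected-IR intensities; tracking the
# intensity centroid across frames reveals the swipe direction.

def centroid_x(frame):
    """Intensity-weighted horizontal centroid of a frame (columns 0-9)."""
    total = sum(sum(row) for row in frame)
    weighted = sum(x * v for row in frame for x, v in enumerate(row))
    return weighted / total

def make_frame(bright_col):
    """Synthetic frame with a bright 'hand' blob centred on one column."""
    return [[100 if abs(x - bright_col) <= 1 else 5 for x in range(10)]
            for _ in range(6)]

frames = [make_frame(c) for c in (2, 4, 6, 8)]  # blob drifting rightward
xs = [centroid_x(f) for f in frames]
direction = "right" if xs[-1] > xs[0] else "left"
print(direction)
```

A real implementation on a companion MCU would also reject ambient-light offsets and require the motion to exceed noise thresholds, but centroid tracking conveys the core idea of turning a coarse pixel array into a gesture.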
The ams OSRAM AS8579 Sensor Interface is used for human-presence detection and many other applications. The sensor can detect capacitive changes in different applications and measure the relative change of impedance, depending on the circuit. The integrated circuit measures the current through a connected sense electrode, such as a metal object, and applies algorithms to determine the impedance. The data can be read via a Serial Peripheral Interface (SPI), which is also used for IC configuration.
Microchip Technology's SAM E51 Integrated Graphics and Touch Curiosity Evaluation Kit lets designers evaluate the SAME51J20A microcontroller without external tools. The kit provides access to the SAME51J20A's features, easing integration of the device into a custom design, and combines a touch screen, thin-film transistor (TFT) graphics, and a touch surface with the single-chip microcontroller. The kit is supported by the MPLAB® X Integrated Development Environment (IDE) and also includes an onboard debugger, 8MB of QSPI Flash, and an onboard CAN-FD transceiver.
Our interface with machines is no longer science fiction. We're already encountering HMI development in everyday life. The challenge for engineers is designing solutions that further enhance the human experience as we interact with high-precision and safe HMI. Sci-fi storytellers have provided use-case scenarios—so the sky is the limit.
Over the years, many electronic devices have been miniaturized to a much smaller scale than their original predecessors. For many electronic devices, miniaturizing electronic components (batteries, circuits, etc.) to the micron and then the nano level has been enough to reduce the device's overall size while keeping its efficiency high.
However, devices that use optical components are inherently trickier to miniaturize and enhance because the materials used within them need to be very high quality, and the devices themselves need to be incredibly precise. Over the last decade or so, the use of nanomaterials in optic and photonic devices has significantly increased, and they are now present in everything from optical coatings on lenses to photodetectors, light polarizers, and many more.
Many different properties of nanomaterials enable them to absorb, reflect, and manipulate light to perform the desired effect. These have been exploited for a range of optical and photonic devices (as well as devices that use optical components).
Nanomaterials' inherently small size enables the components and devices that use them to be smaller than their bulkier counterparts. In addition, the high active surface area of many nanomaterials can be exploited to manipulate and change the properties of light. Because many optical components are used in electronic devices, those that perform both an optical and an electronic function, such as optoelectronic devices, also require highly conductive materials, and many nanomaterials provide very high electrical conductivity. The abrasion resistance and toughness of many nanomaterials are also useful, helping prevent delicate optical components from being damaged in ways that would degrade their optical efficiency.
One other significant aspect of nanomaterials in optic and photonic technologies is that they exhibit strong light-matter interactions, ensuring that the optical components efficiently interact with light. Many nanomaterials also exhibit a broad optical response and fast relaxation times, and some are efficient enough to be used in terahertz technologies. Even when the optical components remain nanosized—meaning they are used as stand-alone nano components instead of part of a bigger component—they can be easily integrated with other bulkier optical components.
Nanomaterials in optical components range from complete devices that rely on their optical properties to function to components that are used in non-optical devices and to optical coatings used to both protect and enhance the optical properties of the device.
Lenses are used within many electronic devices and large-scale technologies. They are typically too large to be made wholly of nanomaterials, but nanostructures of different materials can be incorporated into lenses to improve their optical performance and toughness. A more common option is to apply nanomaterial-based coatings (or specific thin films for higher-tech applications), which improve the performance and abrasion resistance of the lens without incorporating the nanomaterial into the lens itself. This approach is less expensive, as standard, lower-cost lenses can be used, and the small amounts of coating required are often inexpensive. Moreover, coatings can introduce specific effects to the lens, such as anti-reflective properties.
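Anti-reflective coatings also illustrate why nanoscale control matters here: reflection at the design wavelength is minimized when the coating's optical thickness is a quarter wavelength, so the required physical thickness is t = λ / (4n). A quick worked example using MgF2, a common coating material with a refractive index of about 1.38:

```python
# Quarter-wave anti-reflective coating: reflection at the design wavelength
# is minimised when the coating's optical thickness is a quarter wavelength,
# i.e. physical thickness t = wavelength / (4 * n_coating).

def quarter_wave_thickness_nm(wavelength_nm: float, n_coating: float) -> float:
    return wavelength_nm / (4 * n_coating)

# MgF2 (n ~ 1.38) designed for the middle of the visible band (550 nm):
t = quarter_wave_thickness_nm(550, 1.38)
print(round(t, 1))  # a coating only ~100 nm thick
```

A hundred-nanometre film that must be uniform across the whole lens is exactly the kind of requirement that drives the precision demands described above.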
While basic lenses don’t usually require nanomaterials, the use of nanostructures within lenses—and coatings/thin films—can, aside from basic performance and optical clarity enhancements, drastically alter the properties of the lens. One of the more common ways is to turn a lens into a highly effective optical filter or optical polarizer, both of which are used in a range of technologies.
A few other specific applications of nanomaterials include optical cavities, saturable absorbers, and optical switches for ultrafast lasers, as well as in complementary metal-oxide-semiconductor (CMOS) sensors, optical biosensors for detecting various biomolecules, and wearable optic technologies.
Many areas of optics and photonics overlap because both deal with light, although there are some areas where nanomaterials are used only in photonic applications (such as the processing and sensing of light, rather than the manipulation of light that concerns many optical components).
Design engineers can incorporate photonic nanostructures into different lenses as well. Like many nanomaterial-enhanced optical lenses, photonic nanostructures improve the performance of microscopes. Another big area of nanophotonics is in photodetectors because the electromagnetic absorption properties of many nanomaterials lead to highly efficient photodetectors, which are used in many technologies, including computers. Moreover, because many nanomaterials have a broad absorption spectrum, these photodetectors can detect ultraviolet (UV), infrared (IR), and photons of visible light.
Nanomaterials have been used to create a range of components within fiber optic cables. These include optical filters and Bragg gratings, which are either used on their own within fiber optic cables or to build other components, such as fiber optic anemometers. Many nanomaterials are used to build different parts of fiber optic cables because they interact strongly with the propagating light. Other areas where nanomaterials are used to create photonics-based devices include fiber optic rectennas to convert electromagnetic waves into an electric current, more advanced magnetic recording devices, and ways of improving the light intensity in spectroscopy and solar cell technologies, to name a few.
Optical and photonic technologies have advanced recently, but the need for highly precise components makes their improvement more challenging than many other technologies. In recent years, however, a wide range of nanomaterials have been used due to their strong interaction with light and their ability to manipulate the properties of light. Thus, many more technologies are being developed that use high-tech nanomaterial optics and photonic components.
The power of eyewear has come a long way since its inception. The first eyeglasses were invented in Italy in the late 13th century, revolutionizing the way people with vision impairments interacted with the world. These early glasses were simple convex lenses mounted on frames primarily used to correct farsightedness. Over the centuries, eyeglasses evolved, with improvements in lens technology and frame design enhancing both vision correction and comfort.
Now, what we can expect from a pair of lenses goes far beyond vision correction. The concept of smart glasses marked a significant leap in eyewear technology. Leading the way was Google Glass, or simply Glass (Figure 1), which was introduced in 2013. Glass was one of the first to merge traditional eyeglasses with modern technology. When released, Glass resembled something more like what “The Borg” would wear, for those Star Trek aficionados, displaying information for the user on a head-up display (HUD) much like what you find in many of today’s vehicles.
Figure 1: Google Glass can be controlled using the touchpad built into the side of the device. (Source: https://commons.wikimedia.org/wiki/File:A_Google_Glass_wearer.jpg)
Glass's journey unfortunately didn't align with consumer readiness and market expectations, leading to its decline. In short, consumers were not ready for Glass. However, the evolving integration of advanced technologies is now fueling a renewed interest in the smart glasses sector.
Fast forward to today, and despite the setbacks faced by Google Glass, smart glasses have evolved into more practical and stylish wearables. Companies like Ray-Ban and Oakley have entered the market, focusing on aesthetics and functionality. This interest can be attributed to advancements and fusions in technology that have allowed for more stylish and less obtrusive designs, potentially overcoming one of the significant hurdles faced by Google Glass. Furthermore, there's a growing interest in wearable technology as it becomes more integrated into daily life.
Additionally, advancements in augmented reality (AR) and artificial intelligence (AI) could transform how we interact with our environment, offering real-time information overlays and immersive experiences. The vast potential for medical, educational, and business applications indicates that smart glasses may eventually become prevalent in our daily lives.
Today's smart glasses are not only fashionable but also significantly more functional than their predecessors. Smart glasses are being designed for portability and daily use to enhance and interact with the real world. With smaller displays integrated into the lenses, they can overlay digital information without obstructing the user’s vision when displaying notifications, navigation, or camera functions. Also, smart glasses are generally more lightweight and designed to be worn like regular glasses, making them more suitable for continuous wear and everyday activities.
Meanwhile, AR/VR headsets continue to be bulkier, as they are not intended for use while moving around or performing other tasks. These devices are primarily designed for immersive gaming experiences, offering a fully virtual environment that replaces the user's real-world surroundings with a wider field of view. In short, AR/VR headsets isolate the user from their physical environment, while smart glasses are designed to interact with and augment the real world.
Unfortunately, there are privacy concerns surrounding smart glasses, which in part affected the success of Google Glass, and these issues have not necessarily been resolved. Smart glasses present unique privacy concerns compared to other technologies, such as smartphones. They can record audio and video more discreetly, that is, without the visible actions required by smartphones, such as holding up the device. This discretion makes it difficult for others to detect when they are being recorded. Additionally, smart glasses can continuously capture data while worn. Although some smart glasses have security features like file encryption, these do not fully address the issue of covert recording in public or private spaces. Furthermore, while the public is generally aware of smartphones' recording capabilities, smart glasses are newer and less understood, leading to heightened privacy concerns.
This week, we highlight two innovative components from FRAMOS and PUI Audio, both known for quality and forward-thinking design. These components are engineered for the emerging field of next-generation wearable devices, including advanced smart glasses.
The FRAMOS Sensor Module (FSM) featuring the Sony IMX296 sensor is a compact, high-performance module measuring just 26.5mm x 26.5mm. It is equipped with a global shutter sensor offering a 1.6MP native resolution, a 1/2.9" optical format, and a 3.45μm x 3.45μm pixel size. The module supports a MIPI CSI-2 interface with one data lane. Designed for seamless integration into various processing platforms, the FSM family is highly modular, utilizing standardized connectors and mechanical parts. The family spans resolutions from 0.4MP to 24MP, with options for both rolling and global shutters, addressing a broad range of imaging needs. Ideal for sensor evaluation in early-stage design, the FSM facilitates comparative analysis and is easily integrated into third-party processor boards, enhancing its utility in diverse technological applications.
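As a quick sanity check, the quoted pixel size and optical format can be related to each other. The sketch below assumes the IMX296's commonly listed 1456 x 1088 active-pixel array, which is not stated in the article and should be verified against the datasheet:

```python
import math

# Relate the 3.45 um pixel pitch to the 1/2.9" optical format quoted above.
# The 1456 x 1088 active-pixel array is an assumption about the IMX296,
# not a figure from the article.
PIXEL_UM = 3.45
H_PIX, V_PIX = 1456, 1088  # assumed active array

width_mm = H_PIX * PIXEL_UM / 1000
height_mm = V_PIX * PIXEL_UM / 1000
diagonal_mm = math.hypot(width_mm, height_mm)

print(f"active area: {width_mm:.2f} x {height_mm:.2f} mm")
print(f"diagonal:    {diagonal_mm:.2f} mm")  # ~6.3 mm, consistent with a 1/2.9" format
```

A sensor diagonal of roughly 6.3mm is in line with the ~6.2mm typically associated with the 1/2.9" format designation.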
Design engineers focused on smart glasses development can benefit significantly from the FSM-IMX296 Sensor Module.
The PUI Audio Piezo Haptic Benders, comprising three distinct models, offer significant advantages for innovative wearable designs, including smart glasses. The AB1270A-LW100 model is notable for its high-temperature resistance, enduring extreme conditions from -40°C to +85°C, making it suitable for wearables exposed to harsh outdoor environments or used in automotive settings. Meanwhile, the HD-PAB2001-LW100 and HD-PAB2701-1 models stand out with their low-profile design combined with high voltage and displacement capabilities, catering to demanding applications like transmission systems and medical devices such as blood pressure or insulin pumps. These versatile haptic benders, compliant with RoHS/REACH standards, are ideal for integration into wearable applications, offering robust performance in various conditions.
The evolution from traditional eyeglasses to smart glasses showcases remarkable technological and design progress. Google Glass, despite its initial setbacks, catalyzed renewed interest in this domain. Modern smart glasses, leveraging augmented reality and artificial intelligence, blend style with functionality, marking a significant leap in wearable technology. However, privacy issues, notably around discreet recording capabilities, persist as a major challenge. Addressing these concerns is essential for the broader acceptance and integration of smart glasses into daily life.
In this evolving landscape, suppliers like FRAMOS and PUI Audio are playing a pivotal role in the development of next-generation wearables.
FRAMOS. “Sensor Modules Help Accelerate Embedded Vision Development.” February 28, 2019. https://www.framos.com/en/articles/sensor-modules-help-accelerate-embedded-vision-development.
Mukhiddinov, Mukhriddin, and Jinsoo Cho. “Smart Glass System Using Deep Learning for the Blind and Visually Impaired.” Electronics 10, no. 22 (2021): 2756. https://doi.org/10.3390/electronics10222756.
Medical technology has been a fascination of mine ever since I was introduced to engineering. I’ve been around medical equipment since I was three years old due to a heart condition, and I used to go into those huge MRI machines not knowing anything about what this technology did. Turns out that it’s a pretty important tool that doctors use to get images inside the body. Not even ten years after the first MRI was taken, the first robotic system was used for a biopsy, thus beginning the history of robotic technology used in medical applications.
The use of surgical robotics wasn’t approved in the US until 1997, when the Food and Drug Administration (FDA) allowed use of the technology for visualization and extraction purposes—only a fraction of its capabilities. Today, robotics help doctors perform minimally invasive surgeries, which allow patients to recover more quickly with less scarring and reduced risk of infection.
The original goal of using robots in surgical applications was complete autonomy. However, the focus quickly shifted to robotic systems controlled by the surgeon. The da Vinci system was the first FDA-approved controllable system used for medical robotics in North America. This system set the surgical robot standard and has grown to more than 6,000 installed units and 8.5 million operations worldwide1. For minimally invasive surgeries, the da Vinci system allows the surgeon to make precise incisions efficiently, effectively, and with less risk of error. The da Vinci system also eliminated the need for an assistant in the operating room, as engineers added a third arm equipped with the tools necessary to complete the procedure.
Besides surgical robots, healthcare facilities have also implemented robotic systems such as exoskeletons and delivery robots to help both patients and doctors in their respective activities. Exoskeletons help rehabilitate patients with mobility issues, primarily below the waist, using sensors to detect small electrical signals from the muscles and responding with movement2. Mobile delivery robots currently have limited uses but assist in transporting medication to patients, reminding patients to take their medicine, and delivering samples to and from the lab.
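The detect-and-respond loop behind an exoskeleton can be sketched in a very simplified form. This is an illustrative toy, not code from any real exoskeleton: the function names, window size, and threshold are all hypothetical, and real systems use far more sophisticated signal processing:

```python
# Toy sketch of the sense-then-assist idea: rectify and smooth a
# muscle-activity (EMG-like) signal, then trigger assistance when the
# smoothed level crosses a threshold. All names and values are hypothetical.

def smoothed_activity(samples, window=5):
    """Mean of the rectified (absolute-value) signal over the last `window` samples."""
    recent = [abs(x) for x in samples[-window:]]
    return sum(recent) / len(recent)

def should_assist(samples, threshold=0.3, window=5):
    """True when muscle activity is high enough to trigger assistive motion."""
    return smoothed_activity(samples, window) > threshold

# A quiet signal does not trigger; a sustained contraction does.
print(should_assist([0.01, -0.02, 0.01, 0.0, -0.01]))  # False
print(should_assist([0.4, -0.5, 0.6, -0.45, 0.55]))    # True
```

Rectifying and averaging is the simplest way to turn a noisy, zero-centered muscle signal into a usable activation level; production systems layer filtering, calibration, and intent classification on top of this idea.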
Nanorobotics is the next step in creating life-changing technology for the medical industry. These microscopic robots will be able to travel and think autonomously to find and fix health issues in the body. These tiny bots can be the size of a human hair and will be able to connect and form any shape necessary for treatment. In terms of the sensors used in these nanorobots, you might say that teamwork makes the dream work, as researchers continue to use biological reactions to target and determine what job needs to be accomplished.
We aren’t quite at the level of Tony Stark’s nanotechnology, but we are closer than ever. Though the nanorobots won’t be able to generate a full body exoskeleton in seconds, nanorobots will have the ability to find and treat damaged cells anywhere in the body. Ideally, this technology will be used for cancer treatment, as there are very few treatment options currently available. However, these microscopic bots will have many more use cases than just medicine. They could be used as research vessels, allowing engineers and scientists to build or discover microscopic properties that were previously unknown to us. Hopefully, this leads us to a real-life nanotech Ironman suit. That would be awesome.
We're not quite there yet, but the following product could be an essential step forward. This week’s New Tech Tuesday features the CUI Devices AMT13A Modular Incremental Encoder Kits.
The AMT13A series is an industrial-grade incremental encoder with a wide range of industrial use cases, including robotics. Incremental encoders sense motion and report shaft position by emitting pulses as the shaft rotates; in a common optical design, an LED shines through a slotted disk and the interruptions of the light are counted. Accurate motor positioning and avoidance of unwanted movement are vital to the success of a robot, especially one used for surgical applications. Featuring a rugged design and a compact package, the AMT13A series offers high accuracy at 4096 pulses per revolution (PPR) and a low current draw. The modular incremental encoder kit contains various sizes of shaft adapters and a pre-installed alignment tool to get any project fitted with an encoder.
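Incremental encoders typically report motion on two quadrature-phased channels (A and B), and the controller turns those transitions into a signed count and an angle. The sketch below shows the idea for a 4096 PPR device like the AMT13A; the channel names, sampling scheme, and sample data are illustrative rather than taken from the AMT13A datasheet:

```python
# Illustrative x4 quadrature decoder for a 4096 PPR incremental encoder.
# Each valid transition of the two-bit (A, B) state adds or subtracts one
# count, so a full revolution yields 4 * 4096 = 16384 counts.

PPR = 4096
COUNTS_PER_REV = 4 * PPR  # x4 decoding: every edge on A or B is counted

# Transition table: (previous AB state, new AB state) -> count delta
_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Accumulate a signed count from a sequence of (A, B) bit pairs."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        state = a << 1 | b
        count += _DELTA.get((prev, state), 0)  # 0 = no change or invalid jump
        prev = state
    return count

def angle_degrees(count):
    """Convert an accumulated count to a shaft angle in degrees."""
    return (count % COUNTS_PER_REV) / COUNTS_PER_REV * 360.0

# One full forward quadrature cycle produces 4 counts;
# traversing it in reverse produces -4.
cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(decode(cycle))                   # 4
print(decode(list(reversed(cycle))))   # -4
```

Counting every edge of both channels (x4 decoding) quadruples the resolution over the raw PPR figure, which is why a 4096 PPR encoder can resolve shaft angle to better than a tenth of a degree.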
Medtech is an important and growing industry, and robots are increasingly playing a key role in lifesaving procedures. Robotics has been used in medical applications for over 20 years, and the benefits have been remarkable. Engineers are pioneering a new wave of medicine with robotics in ways we previously saw only in movies. Today's surgical robots provide accuracy and efficiency that have transformed the patient experience, with smaller incisions, shorter recovery times, and lower risk of infection. Nanorobots are the next step in the evolution of medical robotics; with their implementation, several medical treatments will become minimally invasive and may be able to cure previously untreatable diseases.
Copyright ©2024 Mouser Electronics, Inc.