Advances in speech synthesis have accelerated the adoption of smart assistants such as Amazon Alexa and Apple Siri, but sophisticated speech capabilities are edging closer to offering a more vital service. Speech technologies based on artificial intelligence (AI) are evolving toward the ultimate goal of giving voice to the millions of individuals who have lost their speech or live with speech impairments.
Cutting-edge voice technology underlies a massive, fiercely competitive marketplace for smart products. According to the 2022 Smart Audio Report1 from NPR and Edison Research, 62 percent of Americans aged 18 and over use a voice assistant in some type of device. For companies, keeping pace with sophisticated voice capabilities is critical, not just for securing their synthetic voice brand, but also for seizing the unprecedented opportunities for direct interaction with consumers through AI-based agents that listen and respond through the user’s device in natural-sounding conversation.
Speech synthesis technology has evolved dramatically from voice encoder, or vocoder, systems first developed nearly a century ago to reduce bandwidth in telephone line transmissions. Today’s vocoders are sophisticated subsystems based on deep learning algorithms like convolutional neural networks (CNNs). In fact, these neural vocoders only serve as the backend stage of complex speech synthesis pipelines that incorporate an acoustic model capable of generating various aspects of voice that listeners use to identify gender, age, and other factors associated with individual human speakers. In this pipeline, the acoustic model generates acoustic features, typically in mel-spectrograms, which map the linear frequency domain into a domain considered more representative of human perception. In turn, neural vocoders like Google DeepMind’s WaveNet use these acoustic features to generate high-quality audio output waveforms.
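To make the pipeline concrete, the short Python sketch below uses the open-source librosa library to compute the kind of log-mel-spectrogram features an acoustic model would hand to a neural vocoder. The file name, sampling rate, and mel-band count are illustrative assumptions rather than settings tied to any particular TTS system.

```python
# Illustrative only: extracting mel-spectrogram "acoustic features" of the kind
# a neural vocoder consumes. Assumes the librosa library and a local WAV file.
import librosa
import numpy as np

# Load a short speech recording (resampled here to 22,050 Hz).
waveform, sample_rate = librosa.load("sample.wav", sr=22050)

# Map the linear-frequency spectrogram onto 80 mel bands, the perceptually
# motivated scale described above and a common input size for neural vocoders.
mel = librosa.feature.melspectrogram(
    y=waveform, sr=sample_rate, n_fft=1024, hop_length=256, n_mels=80
)

# Vocoders are usually trained on log-compressed mel energies.
log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))
print(log_mel.shape)  # (80 mel bands, number of frames)
```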
Text-to-speech (TTS) offerings abound in the industry, ranging from downloadable mobile apps, open-source packages like OpenTTS, and comprehensive cloud-based, multi-language services such as Amazon Polly, Google Text-to-Speech, and Microsoft Azure Text to Speech, among others. Many TTS packages and services support the industry-standard Speech Synthesis Markup Language (SSML), allowing a consistent approach for speech synthesis applications to support more realistic speech patterns, including pauses, phrasing, emphasis, and intonation.
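As a rough illustration of SSML in practice, the Python sketch below passes an SSML document to Amazon Polly through the boto3 SDK. The voice, break duration, and output file name are assumptions made for this example; other SSML-aware services accept similar markup.

```python
# Hedged example: synthesizing SSML-annotated text with Amazon Polly via boto3.
import boto3

polly = boto3.client("polly")

ssml = """
<speak>
  Welcome back.
  <break time="400ms"/>
  Your next appointment is on <emphasis level="moderate">Tuesday</emphasis>,
  <prosody rate="90%">at nine thirty in the morning.</prosody>
</speak>
"""

response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",        # interpret the input as SSML rather than plain text
    VoiceId="Joanna",       # assumed example voice
    OutputFormat="mp3",
)

# Write the returned audio stream to disk.
with open("greeting.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```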
Today’s TTS software can deliver voice quality that’s a far cry from the robotlike speech of the electrolarynx, or from the synthesized voice the late Stephen Hawking kept as his signature even after improved voice-rendering technology became available2. Even so, these packages and services focus on providing a realistic voice interface for applications, websites, videos, automated voice response systems, and the like. Reproducing a specific individual’s voice—including their unique tone and speech patterns—is not their primary objective.
Although some services such as Google’s provide an option for creating a user-supplied voice by special arrangement, they aren’t geared toward the critical need of reproducing a voice an individual has lost. This need is critical because a person’s unique voice is so closely tied to their identity: A simple voiced greeting conveys far more than the individual words. Individuals who have lost their voice feel a disconnection that goes beyond the loss of vocalization. For them, the ability to interact with others in their own voice is the real promise of emerging speech synthesis technology.
Efforts continue to lower the barrier to providing synthetic voices that match the unique persona of an individual. For example, last year actor Val Kilmer revealed that after he lost his voice to throat cancer surgery, UK company Sonantic provided him with a synthetic voice that was recognizably his own. In another high-profile voice-cloning application, the voice of the late celebrity chef Anthony Bourdain was cloned for a film about his life, delivering words in Bourdain’s voice that the chef wrote but never spoke aloud.
Another voice pioneer, VocalID, provides individuals with custom voices based on recordings that each individual “banks” with the company in anticipation of losing their voice, or based on banked recordings made by volunteers and matched to the individual who has lost their voice. The individual can then run the custom voice synthesis application on an iOS, Android, or Windows mobile device, carrying on conversations in their unique voice.
The technology for cloning voices is moving quickly. This summer, Amazon demonstrated the ability to clone a voice using audio clips less than 60 seconds in duration. Although billed as a way to resurrect the voice of dearly departed relatives, Amazon’s demonstration highlights AI’s potential for delivering speech output in a familiar voice.
Given the link between voice and identity, high-fidelity speech generation is both a promise and a threat. As with deepfake videos, deepfake voice cloning represents a significant security threat. A high-quality voice clone was cited as a contributing factor in the fraudulent transfer of $35 million in early 2020. In that case, a bank manager wired the funds in response to a telephone transfer request delivered in a voice he recognized, but that voice proved to be a deepfake.
With an eye on the market potential for this technology, researchers in academic and commercial organizations are actively pursuing new methods to generate speech output capable of all the nuances of a human speaker to more fully engage the consumer. For all the market opportunity, however, advanced speech synthesis technology promises to deliver a more personal benefit to the millions of individuals who are born without a voice or have lost their voice due to accident or illness.
1. “The Smart Audio Report.” National Public Media, June 2022. https://www.nationalpublicmedia.com/insights/reports/smart-audio-report/.
2. Handley, Rachel. “Stephen Hawking’s voice, made by a man who lost his own.” BeyondWords, July 15, 2021. https://beyondwords.io/blog/stephen-hawkings-voice/.
The Pokémon Go craze may have tapered off, but key takeaways remain: Users downloaded the mobile app more than 500 million times, and until the craze abated, hordes of fans flocked to malls, memorials, and even cemeteries trying to capture a rare virtual pocket monster or accrue points to progress in the game.
What can we learn here? That Augmented Reality (AR) engages users and enables them to see and do what they couldn’t before. A social game that blended physical and virtual worlds propelled AR to the forefront of technologies with the potential to transform industries. What’s more, we can draw on how industries like medicine have applied AR to ease procedures and educate practitioners.
Venipuncture, the technique of puncturing a vein to draw blood or deliver an intravenous injection, is one of the most common medical procedures. Some patients, though, present extra challenges, including the elderly, burn victims, drug abusers, and patients undergoing chemotherapy. Of the three million procedures performed daily in the U.S., an estimated 30 percent require multiple attempts before finding a suitable vein.
Augmented Reality can help. Huntington, NY company AccuVein uses noninvasive Infrared (IR) technology to scan the target site and display the underlying vein structure. Because the hemoglobin in blood absorbs more of the infrared light than the surrounding tissue, the resulting image (Figure 1) shows the veins as a web of black lines on a background of red.
Figure 1: The rechargeable AV400 AR scanner weighs less than 10 ounces and displays the veins underneath the skin. (Image source: AccuVein)
AR vein illumination can increase the first-stick success rate by up to 3.5 times, which leads to increased patient satisfaction, reduced pain, reduced workload, and reduced cost. In a surgical application, vein illumination can help the surgeon to identify the optimal incision site, which reduces bleeding and lowers costs.
For surgeons, AR offers a hands-free and seamless way to access digital information while performing a delicate operation. German technology supplier Scopis has just introduced an application that combines Microsoft’s HoloLens head-mounted display with a surgical navigation system to help surgeons performing spine surgery (Figure 2). The platform provides a hands-free display and a holographic overlay that indicates exactly where the surgeon should operate.
Figure 2: The Scopis surgical navigation tool: (a) Surgeon headset; (b) AR image. (Source: Scopis)
The next stage of development will be to combine data from multiple sources such as MRI (Magnetic Resonance Imaging) or PET-CT (Positron Emission Tomography-Computed Tomography) into a fused AR image that can provide the surgeon with customized information for each procedure.
Beginning in 2019, Case Western Reserve University will be teaching anatomy to future doctors without the use of cadavers. Instead, medical students will use head-mounted displays to view an AR representation of a human body (Figure 3).
Figure 3: An AR headset enables anatomy students to examine a virtual human body and navigate through successive layers of skin, muscle and organs. (Source: Microsoft)
The technology adds a vital element missing from earlier attempts to teach the subject using large touch screens. Users can now walk around a 3-D image of the body’s skeleton, organs, and veins and view the display from any orientation.
The next stage of development will allow users to interact with the image in real time—rotating the body or “moving” an organ to examine the underlying arteries, for example.
Oxford (UK) start-up Oxsight is testing AR glasses to help visually impaired patients recognize objects and move around their environment. The smart glasses detect light, movement, and shapes, and then display sensor data in a way that helps the user make the most of his or her remaining vision. Each person’s needs are different, so the display can be adjusted to customize the view. For example, the display can show a person as a simplified cardboard-cutout image, boost certain colors, or zoom in or out.
These are only a few examples. Many other medical AR applications are in the “proof-of-concept” stage, including live-streaming of patient visits with remote transcription services; remote consultation during surgical procedures; and assistive learning for children with autism.
What are the building blocks for AR design? Many AR applications are still on the drawing board, but existing wearable and portable medical devices already incorporate many of the core hardware technologies, with Microchip Technology at the forefront. The block diagram for Microchip’s wearable home health monitor design (Figure 4), for example, includes a powerful processor with analog functions, sensor fusion capability, low power operation, and cloud connectivity.
Figure 4: A high-end wearable home health monitor includes many of the blocks needed for an AR application. (Source: Microchip Technology)
Similarly, the XLP (eXtreme Low Power) family of PIC microcontrollers is designed to maximize battery life in wearable and portable applications. XLP devices feature low-power sleep modes with current consumption down to 9nA and a wide choice of peripherals. The PIC32MK1024GPD064, for example, is a mixed-signal 32-bit machine that runs at 120MHz, with a double-precision floating-point unit and 1MB of program memory. Signal conditioning peripheral blocks include four operational amplifiers (op-amps), 26 channels of 12-bit Analog-to-Digital Conversion (ADC), three Digital-to-Analog Converters (DACs), and numerous connectivity options.
Microchip also offers a sensor fusion hub, as well as several wireless connectivity options including Bluetooth® and Wi-Fi modules. Combined with third-party optics and other blocks, these components can form the basis of a low-cost AR solution.
Finally, the Microsoft HoloLens core combines a 32-bit processor, a sensor fusion processor, and a high-definition optical projection system. Other key components include wireless connectivity, a camera and audio interface, power management, and cloud-based data analytics.
AR technologies have already demonstrated their value in medical applications and promise to bring big changes over the next few years to both the clinic and the operating room. Although the optics add a new dimension, many of the hardware building blocks have already been proven in high-volume wearable and portable products.
Proponents of vertical farming paint a seductive picture: Fresh food without pesticides, increased production, reduced water consumption, use of vacant inner-city real estate, and more. Making this vision a reality requires precise control of light, temperature, water, and nutrients, and involves a wide range of IoT technologies, including sensors, robotics, and data analysis.
Contrary to the pastoral vision of golden fields of wheat swaying gently in the breeze, the vertical farm is closer to a factory than a farm (Figure 1). The technology is changing quickly, and the business is demanding: Commercial vertical farms are capital-intensive, requiring millions of dollars of investment to get started, and they face stiff competition from greenhouses and other indoor farming operations.
Figure 1: A vertical farm uses technology in a factory-like setting to ensure products of consistent quality. (Source: Mirai)
Vertical farms use technology at every part of the farming process, ranging from nursery operations to harvesting, as Figure 2 shows:
Figure 2: Technology can help improve every stage of the indoor farming process. (Source: Newbean Capital/Local Roots)
Large indoor growers use a wide range of automated devices, from automatic seeders to nursery robots that reposition pots. Since indoor agriculture is still a small market, few purpose-built pieces of equipment exist, so vertical farmers often adapt technologies from other industries.
Heating, ventilation, and air conditioning (HVAC) systems help create the optimal growing environment by controlling temperature, humidity, carbon dioxide (CO2) levels, air movement, and filtration. Plants grow more quickly at CO2 levels above the atmosphere’s roughly 400 parts per million (ppm), so tanks of CO2 are used to raise levels in the vertical farm to around 1,000 ppm.
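As a simplified sketch of how CO2 enrichment might be automated, the hypothetical Python loop below holds the level near a 1,000 ppm target using a hysteresis band. The sensor readings are simulated; a real installation would read an actual CO2 sensor and drive the solenoid valve on the CO2 tank.

```python
# Hypothetical CO2 enrichment controller with a simple hysteresis band.
import random

TARGET_PPM = 1000      # enrichment level cited above
DEADBAND_PPM = 50      # band around the target to avoid rapid valve cycling

def control_step(ppm: float, valve_open: bool) -> bool:
    """Decide whether the CO2 valve should be open for the next interval."""
    if ppm < TARGET_PPM - DEADBAND_PPM:
        return True           # below the band: inject CO2
    if ppm > TARGET_PPM + DEADBAND_PPM:
        return False          # above the band: stop injecting
    return valve_open         # inside the band: keep the current state

# Simulated run: the room drifts toward ambient levels and rises when injecting.
ppm, valve_open = 400.0, False
for _ in range(20):
    valve_open = control_step(ppm, valve_open)
    ppm += (40 if valve_open else -15) + random.uniform(-5, 5)
    print(f"CO2 {ppm:6.0f} ppm, valve {'open' if valve_open else 'closed'}")
```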
Climate control systems run the gamut, from basic fans and heaters through to multi-functional control systems that incorporate the latest chiller, infrared, and UV sterilization technologies. The optimal system for any farm depends on several factors: Local regulations; farm size, type, and location; crop types; and, of course, budget. In selecting a system, there’s often a tradeoff between capital expenditure (CapEx) and operational expenditure (OpEx): More expensive systems tend to be more efficient and have lower operating costs.
Compared to the traditional farm that gets free energy from the sun, the vertical farm uses artificial light to promote faster growth, and the cost of energy is one of the largest line items on the budget.
Many vertical farms have traditionally used fluorescent lights; these are relatively cheap to buy, but LED lights, with their greater efficiency, consume about 60 percent less power for the same output. LEDs have technical advantages, too: Their light levels can be precisely controlled, and because they don’t emit much IR radiation (heat), they can be placed close to the plants for best light absorption. LEDs can also create the best combination of light spectrum and intensity that gives the most energy-efficient photosynthesis for each plant species.
Mirai in Japan, for example, uses 17,500 LED bulbs that provide the exact wavelengths that various crops need to thrive. According to the company, the new system has reduced power consumption by 40 percent and increased yields by 50 percent.
Longer term, researchers expect that organic LEDs (OLEDs), which use a film of organic compounds to generate light, will eventually become a more economical and efficient option.
The vertical farm is a closed environment, and farmers take strict steps to eliminate pests, pollen, or viruses. The precautions apply to humans, too: Before entering Mirai’s “Green Room,” workers must take hot showers, wash with shampoo and body soap, and change into sterilized work clothes.
Vertical farms don’t use soil as a growing medium to transfer nutrients to plant roots. Instead, they typically use hydroponics, in which the roots sit in a circulating nutrient-rich water solution, or aeroponics, in which the exposed roots are misted with a nutrient solution.
For both methods, operators continually monitor all the macro- and micronutrients being supplied to the plants (Figure 3). Unlike in a conventional operation, the water that evaporates from the plants into the atmosphere isn’t lost: Air conditioners recover up to 98 percent of the water in a vertical farm.
Figure 3: Vertical farms control every aspect of the growing process, from the placement of the lights to the nutrients applied to the roots. (Source: Aerofarms)
The result of the tight monitoring and control is that a vertical farm doesn’t use pesticides, herbicides, or fungicides; the harvest can be ready in as little as 18 days, half the time of a conventional farm. Vertically farmed crops can contain considerably more vitamins and minerals than conventional produce: Mirai claims that their lettuce has up to ten times more beta-carotene and twice the vitamin C, calcium and magnesium of a standard product. In addition, no rinsing is needed and up to 95 percent is useful in cooking, compared to the usual 60 percent.
What is the next stage of vertical farming? Ruthless cost reduction and even less human involvement, so robots and drones are increasingly used. In Kyoto, Japan, SPREAD has just broken ground on its new “Techno Farm,” a 47,000-square-foot, $175 million factory that’s slated to produce 30,000 heads of lettuce a day when it’s complete in 2018.
The farm will use robots that resemble “conveyer belts with arms,” according to TechInsider, to plant seeds, water and trim plants, and harvest them. Compared to SPREAD’s existing Kameoka plant factory, the Techno Farm will cut labor costs by 50 percent and energy costs by 30 percent.
Drones will reduce the number of human operators needed to monitor large areas. Suppliers to the industry are introducing lightweight drones that are suitable for monitoring crop conditions in large-scale indoor farms: Intel®, for example, offers its Aero Drone, an unmanned aerial vehicle (UAV) development platform that includes a wireless controller and the Intel® RealSense™ camera.
In conventional farming, data analytics provides farmers with both current and historical data on their crops. The information comes directly (from instrumented fields and equipment) and indirectly (via satellites and GPS tracking systems). Data metrics include soil quality and moisture, rainfall accumulation, fertilizer and pesticide levels, and crop yields.
State-of-the-art vertical farms view data collection and analysis as a key element in their business model, employing as many data scientists and engineers as they do agronomists and plant biologists. Aerofarms, for example, collects more than 10,000 measurements during a single growing cycle; the company uses this data to boost yields and quality, as well as to drive down costs towards the point that the vertical farm can be competitive with the best conventional methods.
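The snippet below is a hypothetical illustration of that kind of per-cycle analysis using pandas; the file name and column names are invented for the example, and a production farm’s data model would be far richer.

```python
# Hypothetical per-growing-cycle summary of sensor readings and yields.
import pandas as pd

# Assumed columns: cycle_id, rack_id, timestamp, ec_ms_cm, ph, air_temp_c, yield_g
readings = pd.read_csv("grow_cycle_readings.csv")

summary = (
    readings.groupby("cycle_id")
    .agg(
        mean_ec=("ec_ms_cm", "mean"),        # nutrient solution conductivity
        mean_ph=("ph", "mean"),
        mean_temp=("air_temp_c", "mean"),
        total_yield_g=("yield_g", "sum"),
    )
    .sort_values("total_yield_g", ascending=False)
)
print(summary.head())
```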
The vertical farm makes extensive use of technology to grow plants in a factory environment. Just as on a traditional production line, workers and managers monitor and control every aspect of the crop to maximize yields and ensure consistent quality. The data gathered helps improve quality in future generations of “products.”
For more information about electronics in vertical farms, visit Mouser’s Shaping Smarter Cities website.
The world desperately needs a new generation of big-dreaming inventors
The buzzword “innovation” has become something of a mantra for the technology sector, designed to stimulate the out-of-the-box thinking needed to come up with winning products against ever tougher competition. But innovation is perhaps as wrong a mantra today as it was in the seventeenth century when, according to an article in The Atlantic, innovators were more likely to have their ears cut off and be thrown into jail than celebrated. What we need today, more than ever, is not innovation, but invention.
The difference is subtle but important: The Oxford English Dictionary defines “to innovate” as “make changes in something established, especially by introducing new methods, ideas, or products” (my italics). The dictionary says the word traces its roots back to the Latin “innovat” which means “renewed or altered”.
At the height of the Industrial Revolution, innovation played a subservient role to invention – “the action of creating or designing something that didn’t previously exist” (again, my italics). Between the Industrial Revolution and the early part of the twentieth century, engineers came up with virtually all the major inventions that, in one form or another, underpin our daily lives. Electricity generation and distribution, sanitation, the telephone, flight, the internal combustion engine, television and anesthetics are just a few examples.
The change of focus from invention to innovation happened around the mid-1930s (in part as a reaction to the Great Depression) when companies began to realize that they could get quicker returns refining proven technology to extend its commercial life rather than keep coming up with new ideas. It had dawned on companies that the problem with new ideas was that there was no guarantee of commercial payback.
WWII shook the industrial world out of its creative malaise. A world war is, of course, a disaster for mankind, but if there were any silver linings to the global conflict, they came in the shape of antibiotics, jet engines, nuclear power, and the fundamentals of computing architecture, to name but a few. That momentum, maintained in part by the Cold War, led to supersonic flight, digital computers, organ transplantation and, perhaps the crowning glory of man’s ingenuity so far, the Apollo program.
But since the late 1970s, according to some leading economists, the pace of invention has declined. They point to the fact that between 1900 and 1980, U.S. life expectancy increased from 49 to 74, but since then it has only crept up another four-and-a-bit years; flying across the Atlantic takes seven hours when Concorde used to do it in half that; the average speed of traffic in our congested cities has dropped to near that of the years when the horse and cart was the best way to get around, and American and European astronauts now struggle to make it into low Earth orbit and even then only by hitching a lift on the venerable Russian Soyuz spacecraft. Nowadays, argue those economists, we have 140 characters, selfie sticks and self-balancing scooters as the prime examples of our inventiveness. (An article in The Economist, while now a few years old, expands on this argument and makes interesting reading.)
It’s a harsh assessment, but not without merit. Large technology companies are under increasing pressure to provide fast returns on investment for venture capitalists and shareholders. Worse still, endless litigation over patents and a paucity of engineering talent, not to mention global economic stagnation, hardly make for the best environment to nurture the inventiveness and creativity needed to come up with the next big thing.
But there is cause for optimism, actually plenty of cause for optimism. Science, Technology, Engineering and Math (STEM) initiatives in western nations are starting to address the skills shortages in those countries. At the same time, China and other Asian countries are educating a generation of extremely talented engineers who are already making their mark in the smartphone, computer, aerospace and automotive sectors.
Better yet, the Maker Movement, fuelled by open-source software and hardware, is providing a worldwide reservoir of creativity free from the restrictions of corporate bean counters and out of the grasp of patent trolls. By adding a splash of crowdfunding plus some online marketing, it’s possible for a small group of talented individuals to get products to a market that was the exclusive domain of powerful businesses only a decade or so ago.
News headlines about data breaches have become so commonplace that reaction to a massive theft of data one week is quickly overtaken the next week by accounts of an even more egregious security breakdown.
Hackers have stolen information from every type of organization—even three-letter government agencies once considered impenetrable. The useful lesson lies less in the notion that no one is immune and more in an important consideration: Security threats and their mitigation are a constant struggle involving not just cyber experts but everything and everyone that touches data. In 2018, one of the greatest gaps in data security is the failure to appreciate that security is an organizational problem, one that needs to combine technologies, practices, and policies at each level of the system, whether in enterprise IT or spread through a cloud-based IoT application.
Take data encryption, for example. Developers recognize the fundamental need for cryptographic methods to ensure the integrity of data and metadata passed across networks and stored on hosts. Technologies such as elliptic curve cryptography have gained increased acceptance for their ability to provide the same level of security as older crypto approaches with much shorter key lengths and faster computation—important considerations for resource-constrained IoT devices. Yet even the most robust crypto algorithm cannot ensure security without accompanying policies for protecting crypto keys throughout the key life cycle, including key creation, device provisioning, and even key revocation.
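As a small illustration of the key-size advantage, the sketch below uses Python’s cryptography package to generate an elliptic-curve key pair on the NIST P-256 curve, whose 256-bit keys offer security comparable to much longer RSA keys. Key storage, provisioning, and revocation still depend on the surrounding policies described above.

```python
# Illustrative sketch: generating and serializing an elliptic-curve key pair.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256
public_key = private_key.public_key()

# Serialize the public key for provisioning to a device or service.
pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode())
```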
Robust technologies, practices, and policies for cryptography are necessary for security but are by no means sufficient. The overall integrity of an application also requires assurance that data suppliers and consumers are authorized participants in the overall data workflow. This assurance takes the form of authentication protocols such as transport layer security (TLS), elliptic-curve Diffie–Hellman, and others in widespread use on the Internet and in web applications.
On the Web, authentication is typically limited to host authentication to assure users that they are in contact with the intended host. Although this one-sided authentication might be satisfactory for web applications, IoT applications typically require mutual authentication, where the IoT device and the host each validate the identity of the other. Even so, developers need to combine authentication technologies with suitable practices. For example, authentication protocols might allow reuse of the same session key from session to session—a practice that exposes devices and hosts to man-in-the-middle attacks and session hijacking, as recently documented by Carnegie Mellon University's Computer Emergency Response Team (CERT).
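One common way to avoid session-key reuse is to derive a fresh key for every session with ephemeral elliptic-curve Diffie–Hellman. The sketch below, again using Python’s cryptography package, simulates both sides of such an exchange; the HKDF context string is an arbitrary example value, and a real protocol such as TLS wraps this exchange in additional authentication steps.

```python
# Sketch: ephemeral ECDH so that no session key is ever reused across sessions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key(local_private, peer_public) -> bytes:
    """Derive a 256-bit session key from an ECDH exchange."""
    shared_secret = local_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"iot-session").derive(shared_secret)

# Each session, device and host generate *ephemeral* key pairs...
device_ephemeral = ec.generate_private_key(ec.SECP256R1())
host_ephemeral = ec.generate_private_key(ec.SECP256R1())

# ...and both ends derive the same fresh session key from the exchange.
device_key = new_session_key(device_ephemeral, host_ephemeral.public_key())
host_key = new_session_key(host_ephemeral, device_ephemeral.public_key())
assert device_key == host_key
```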
Proper encryption and authentication might still not be enough to ensure the validity of data generated by an IoT device, aggregated by an edge device, and eventually consumed by a cloud-based application. Bad actors can exploit device software update delays to install corrupted versions under their control. Thus, devices and hosts might be using recognized crypto keys and authentication practices but the software running on those systems might itself be untrustworthy.
Secure over-the-air (OTA) updates and secure boot methods are meant to protect against these attacks but vulnerabilities can exist at each layer of the software stack. Ideally, developers employ sufficient security measures to ensure the use of valid software at each layer of the underlying software, thereby creating a robust root of trust for all other security features, software applications, and data operations. In practice, however, building this root of trust can fall short in IoT implementations due to a combination of factors ranging from limited device resources for performing security operations to limited understanding of proper security development practices.
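One building block of such a root of trust is verifying that a firmware image carries a valid signature from the vendor’s key before it is allowed to run. The hedged sketch below shows the idea with Python’s cryptography package; the key pair and image bytes are placeholders, since in practice the public key is fixed in ROM or fuses and the check runs in a minimal first-stage loader.

```python
# Sketch: rejecting firmware whose signature does not verify.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Placeholder vendor key pair, created here only to keep the example self-contained.
vendor_private = ec.generate_private_key(ec.SECP256R1())
vendor_public = vendor_private.public_key()

firmware_image = b"...application firmware bytes..."
signature = vendor_private.sign(firmware_image, ec.ECDSA(hashes.SHA256()))

def firmware_is_trusted(image: bytes, sig: bytes) -> bool:
    """Return True only if the image verifies against the vendor public key."""
    try:
        vendor_public.verify(sig, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(firmware_is_trusted(firmware_image, signature))                # True
print(firmware_is_trusted(firmware_image + b"tampered", signature))  # False
```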
With its Device Identifier Composition Engine (DICE) specification, the Trusted Computing Group proposes a multi-phase approach that uses secrets associated with each phase of the boot process to create a root of trust even in resource-constrained devices. An emerging class of hardware devices already support DICE and work with complementary cloud services to help harden security.
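The snippet below is a loose sketch of the DICE idea rather than the specification’s exact derivation: each boot stage combines its own secret with a hash (measurement) of the code it is about to launch, so any change in a layer’s code changes every secret derived after it. The device secret and firmware images are placeholders.

```python
# Rough illustration of layered, DICE-style secret derivation.
import hashlib
import hmac

def next_layer_secret(current_secret: bytes, next_layer_image: bytes) -> bytes:
    """Derive the next layer's secret from this layer's secret and a measurement."""
    measurement = hashlib.sha256(next_layer_image).digest()
    return hmac.new(current_secret, measurement, hashlib.sha256).digest()

unique_device_secret = b"\x00" * 32              # burned into hardware in practice
boot_loader_image = b"...boot loader code..."    # placeholder firmware images
application_image = b"...application code..."

layer0_secret = next_layer_secret(unique_device_secret, boot_loader_image)
layer1_secret = next_layer_secret(layer0_secret, application_image)

# If any layer's code changes, every secret derived after it changes too,
# which is what allows tampering to be detected.
print(layer1_secret.hex())
```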
Cryptography, authentication, and trusted devices can serve as key enablers for security. Improperly executed, however, those same factors can present additional threat surfaces. Indeed, the development and deployment of any smart device presents multiple threat surfaces, and all the more so when the device is built into IoT applications. Few applications depend as heavily as the IoT does on separate communities of developers, technicians, and users. Each participant in that chain plays a critical role in the successful deployment and operation of these complex applications and holds responsibility for maintaining secure practices within their purview, including avoiding exposure to the social-engineering attacks underlying the most infamous breaches.
The good news is that the industry is beginning to recognize the expansive and collaborative nature of system security. In its recent Security Manifesto, ARM calls for a shared sense of responsibility among technology users and providers alike for reducing the effectiveness of cybercriminals. In 2018, a deep appreciation of the implications of shared responsibility stands as a significant hurdle for achieving security. By approaching security as more than just a technological problem, the industry can begin creating an environment where bad actors find fewer opportunities for compromising securely connected systems.
Rechargeable batteries are an important part of many modern-day technologies. Researchers, manufacturers, and end-user companies are always looking to improve the efficiency of batteries, making them safer, smaller, and more lightweight to fit the requirements of new technologies—that is, the consumer desire for higher-powered, smaller, and lighter electronic devices. Conventional fabrication methods, electrochemistries, and materials will take battery technology only so far, so a lot of interest in recent years has focused on using nanomaterials within the electrodes.
Nothing is wrong with current battery technologies—as showcased by the 2019 Nobel Prize in Chemistry, which honored the scientists behind the Li-ion battery—but change is inevitable. Without an ever-changing electronics industry, technological advances made in the past few decades would not have been possible.
While lithium-ion (Li-ion) batteries are ubiquitous nowadays, their efficiencies aren’t overly impressive. They are much safer than many other battery chemistries and still offer a good energy density, but there is clear room for improvement. Other types of batteries are gaining traction at both fundamental and commercial levels, and a drive has also begun to improve the already-established Li-ion battery by incorporating nanomaterials into it.
The switch to trialing nanomaterials over bulkier materials is happening for many of the same reasons that other industries have made the switch (or are looking to make it): A range of beneficial properties can be gained from including only a small amount of nanomaterial in a device (or within another material).
Not all nanomaterials are suitable for batteries, as some are inherently insulating in nature. Rather, it is the conductive nanomaterials that are of use in battery systems, such as those of a solid-state nature or those that are incredibly thin (for example, some 2D materials). Fortunately, the use of nanomaterials in electrodes is not a completely new invention; it is merely a natural progression that makes systems more effective without altering the internal workings of the technology. While the inclusion of nanomaterials might slightly alter the specific mechanisms by which ions move into the electrodes (because the atomic-scale sites differ in size and geometry across different architectures), the general operational mechanism of the battery remains the same. This means that any safety or efficiency issues that arise can be pinpointed much more easily than in a new type of battery developed from scratch. Sometimes, improving the status quo is better than trying to develop something completely novel.
Many of the nanomaterials being trialed have high electrical conductivity, and with this comes high charge carrier mobility. These properties stem from nanomaterials having a very active surface, and a very high surface area, compared to bulkier materials. In some cases, the active surface is the whole nanomaterial (as with 2D materials). Given that nanomaterials are inherently thin, they are also much more flexible than bulk materials (even the inorganic ones), which makes them more useful for the batteries in flexible and wearable technologies (Figure 1).
Figure 1: The inherently thin nature of nanomaterials, as shown in this graphene sheet, means they are more flexible than bulk materials and thereby more suitable for batteries used in flexible and wearable technologies. (Source: BONNINSTUDIO/Shutterstock.com)
Despite their small size, a lot of nanomaterials are very stable and are resistant to high temperatures, harsh chemicals, and high physical stresses. While this can’t be said for all nanomaterials, enough of them are stable and conductive enough for battery electrodes. One disadvantage of nanomaterials is their higher cost because of the more complex fabrication methods required to make them. However, because only a small amount is needed for the same (or better) properties than bulkier materials, the cost is a lot less than many people believe. The small addition of nanomaterials also means that less waste is often produced, and the batteries are more lightweight than when bulk materials are used.
Of all the nanomaterials being trialed, graphene is the front-runner and is found in more prototypes than any other type of nanomaterial. A number of companies now commercially manufacture graphene batteries for different industrial sectors. Reportedly, some big cellphone manufacturers might use graphene batteries in some next-generation phones in the near future (graphene is already used in the cooling systems of some phones).
Graphene exhibits some of the best recorded values of almost all the properties that nanomaterials can exhibit, which means that the compromises normally made between the beneficial properties of different materials can often be avoided by using graphene. Additionally, given that many batteries already use graphite (many graphene layers stacked on top of each other), it is a much more natural progression than other materials, and systems have been developed that use graphite-graphene electrodes.
Graphene has one of the highest electrical conductivities and charge carrier mobilities known in the materials world. Moreover, it has incredibly high tensile strength and flexibility (more than most nanomaterials and bulk materials), and it is stable against high temperatures and harsh chemicals. This combination of properties means that it can be used in a number of environments without its performance being affected, which is important when the batteries and the technologies they power can get very hot internally, never mind the external heat they can be exposed to. Single-layer graphene is also optically transparent, so it has the potential to be used to create transparent electrodes and transparent conductive films for future batteries that need to be invisible to the user.
Aside from its properties being better suited to batteries than those of most other nanomaterials, the graphene industry is in a much better position to cope with large-scale demand should it arrive. Because the properties and potential of graphene are well understood, the industry has been preparing and growing in size around the world, and graphene can now be produced at scale in a number of different forms. The raw material supply is more scalable than for other nanomaterials, making graphene a much more viable commercial option.
There is a drive in modern-day society for more efficient and smaller batteries, regardless of whether the application is consumer phones or remote monitoring equipment. Companies are starting to trial nanomaterials in battery systems because they bring performance benefits and can make batteries smaller without significantly increasing cost, since only a small amount is required. A number of companies already manufacture graphene batteries for industrial markets, but it might take some time for the high-tech consumer markets (phones, tablets, etc.) to adopt graphene or other nanomaterials on a large scale, because the status quo is well tested and changes in these markets can take a long time to come to fruition.
Designers face a myriad of options for equipment and device user interfaces, and each one has its place in today’s designs. Today’s sophisticated devices may have user interface panels with as many as twenty components controlling, tuning, or adjusting functions. For many devices, there is a clear advantage to Touch Encoder technology, which shrinks the user interface footprint by combining multiple devices such as touchscreens, pushbuttons, trackballs, and switches into one product, replacing all of these components with one control. Figure 1 shows examples of the standard widgets available to developers for customizing their applications.
Figure 1: The Touch Encoder’s standard widgets for customizing the application.
Several trends favor the use of newer user interface technology, including Touch Encoder. First is the global nature of today’s marketplace. Manufacturers building products in one country and then selling these products globally must now support multiple languages—often from five to ten different languages per device platform. Now, device manufacturers either need to standardize legends or icons or support multiple languages on the device user interface through multiple legend variants.
Also, as device manufacturers add more functionalities and configuration options to their products, the logistics and costs required to support all of the potential product variants grow exponentially. Not only do manufacturers have to invest in the custom tooling to fabricate multiple configuration options with higher piece part costs, but they must also manage the extra production/aftermarket inventory requirements.
Finally, the widespread use of tablets and mobile telephones is changing users’ expectations of what an interface should look like. Users are demanding a touchscreen interface even for some non-traditional applications, such as those where people need to perform no-look operations.
The new Touch Encoder technology can be used in a number of markets, especially where reducing the user interface footprint is important in new designs. For example, medical device product engineers can use a Touch Encoder to simplify equipment controls in ultrasound, patient transport, and sterilization applications. Figure 2 shows an ultrasound panel in which a Touch Encoder simplified the design by replacing existing keypads, a trackball, and rotary and pushbutton switches. The Touch Encoder gives users a vivid, high-resolution screen that is completely customizable, making it easier and less costly to support multiple languages.
Figure 2: On this ultrasound panel, a Touch Encoder replaced four switches, a trackball, and eight buttons.
In off-highway applications, the Touch Encoder is being used in armrest, dashboard, and marine applications where no-look operation, sealing, impact resistance, and a CANbus interface are requirements.
For the industrial market, the Touch Encoder can be used in fabrication and production assembly equipment, surface mount technology (SMT) equipment, and appliances. Product development engineers in these markets tend to be cost-conscious, looking for the best-value user interface with as much functionality as possible. For them, removing multiple buttons can save on unit price and tooling costs. A simplified interface also helps with training, since workers do not have to navigate a complex user interface on a cumbersome panel to operate a piece of equipment.
For the digital audiovisual market, the Touch Encoder is a better choice for those performing audio/video mixing tasks. Instead of moving between multiple switches and pushbuttons to tweak audio/video functions, sound engineers can keep their hands in one spot and perform all the functions from one control.
For the vast majority of applications, designers use a USB or CANbus protocol for communicating data between the Touch Encoder and the host processor. In off-highway, industrial, and some medical devices (interventional devices, X-ray, or CT scanners), a CANbus protocol conveniently allows communication with multiple devices on the same bus interface.
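As a hypothetical sketch of host-side CANbus handling, the snippet below uses the open-source python-can package to read frames from a bus and decode an assumed encoder report. The channel name, arbitration ID, and payload layout are invented for illustration; the actual message format is defined by the device’s CANbus documentation.

```python
# Hypothetical example: listening for rotation and touch reports on a CAN bus.
import can

with can.Bus(interface="socketcan", channel="can0") as bus:
    for msg in bus:                        # iterate over received frames
        if msg.arbitration_id == 0x123:    # assumed ID for encoder reports
            detents = int.from_bytes(msg.data[0:2], "big", signed=True)
            touched = bool(msg.data[2])
            print(f"rotation: {detents:+d} detents, touch: {touched}")
```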
New Touch Encoder technology is built to survive harsh environments without sacrificing style or functionality. The Touch Encoder uses a non-contact Hall-effect sensor to provide coded output, determining the position based on feedback from the sensor. Designed with sealed, rugged construction, it is also impact resistant and will survive environments in which other user interface devices would fail. In addition, the Touch Encoder has been engineered to have excellent haptics—providing consistent and crisp feedback to the user, and ultimately showcasing the high quality of the device or end product. The switch detent is repeatable and stable over its life in environments where it can be subjected to wide temperature ranges, electrical noise, vibration, and shock.
To make the new Touch Encoder technology capable of placing so much functionality in one control, ease of application development is paramount. To achieve this goal, the Grayhill Touch Encoder uses an extremely simple tablet-based development kit that allows anyone to develop a custom interface. Industrial designers can personalize multi-touch gestures, generate images, customize the display, and trigger the logic independently, without needing a software engineer to configure the device.
Designers communicate with the Software Development Kit (SDK) through wireless technology. After receiving the development kit, designers simply unpack it and immediately get an overview of how to write a program using the application on the tablet. Within half an hour, users can be writing programs and downloading software, making the entire development process straightforward.