
Bench Talk for Design Engineers | The Official Blog of Mouser Electronics

AI Development: What, How and Why? Answers with Charlotte Han

(Source: metamorworks)

Charlotte Han is a technologist and artificial intelligence strategist. Han, who lives in Germany, is the founder of Rethink.Community. She is passionate about designing AI for good and for humanity. In this Q&A, Han answers our questions about AI.

Q: When you look at the way technology develops, which business areas are most likely to introduce AI on a large scale in the coming five years?

A: We will need to follow the money and the data to find the answer to this one. Companies usually adopt AI because they want either to increase revenue or to reduce costs through greater efficiency.

Data is the lifeline of AI. AI won’t work without data, so the change will first take place where there is data.

These clues lead us to the front line of the business: sales and marketing. These departments are quicker to adopt AI, not only because of their direct impact on revenue, but also because of their desire to understand customers better: Who visited the website? Who downloaded a white paper? Who talked to a sales rep, and when? Why did they abandon their shopping cart? AI can automatically qualify leads and prompt the sales rep at the right time to follow up with prospects. AI can also help provide personalized content and messages for each client and predict customer demand. Sales and marketing are also lower-risk business areas in which to implement AI, because adoption doesn’t require much change in other business functions.

Closely related, and not to be overlooked, is the customer support department. If companies are already collecting data to understand customer behaviour, it only makes sense to use the insight gained from sales and marketing to provide better customer support, as customer retention increases customers’ lifetime value. It is easier to keep an existing customer happy and in a long-term relationship with a company than to acquire new customers. While current virtual agents cannot replace real agents, virtual agents can work tirelessly around the clock and shorten response times.

The other obvious business area is probably supply chain, as it’s continuously examined and asked to improve performance and increase productivity. With the rise of edge computing (or AI at the Edge), analytics are immediately available, and the decisions can be made locally by edge computers, without having to transfer all the data back to a central server for processing and then back again. This greatly reduces latency.

The adoption of AI in the supply chain could rapidly streamline processes and improve accuracy, for example by introducing robotics and anomaly-detection software in manufacturing. Just as AI can help predict customer demand for sales and marketing, it can also be applied to the supply chain to better balance supply and demand on a larger scale. This isn’t limited to physical goods: Energy companies are increasingly interested in forecasting demand in real time to predict surges on the grid and optimize for green energy.

There is also less chance of having to deal with data privacy risks in the supply chain, as most of the data is generated by processes or machines rather than people. One could also argue that, after the COVID-19 pandemic, having a robust supply chain under control will be the secret weapon for a quick recovery from the economic downturn. Unfortunately, this is where the laggards in adopting AI will suffer.

Q: What advances in technology are necessary to optimize the impact of AI introduction in that timeframe?

A: I don’t think we need to look at the stars in a galaxy that’s far, far away to dream about an advanced technology we haven’t seen on earth in order to implement AI, but we will need to democratize the use of data.

Cloud computing has, in a way, helped, but when 5G networks become widely available, edge computing will supercharge the adoption of AI everywhere. Edge computers are typically inexpensive, further removing the barriers to access AI.

There is also work to be done within enterprises. Sadly, some executives still think having a Tableau dashboard on their iPad equals having data.

The foundation of adopting AI is creating the infrastructure that allows data to flow through the pipeline. In an ideal world, it will work like tap water: on demand, there when you need it. Having a data pipeline or infrastructure is especially important when up to 80% of business data is unstructured, so having the right architecture, one able to collect and ingest data from multiple sources, whether structured or unstructured, is the first step for any company that wants to harness the power of AI. With this architecture, you’ll be able to quickly process and move data as needed and get the insights and analytics that accelerate the business.
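To make the idea of a multi-source pipeline concrete, here is a minimal, hypothetical sketch (not from the interview): structured, semi-structured, and unstructured records are each normalized into one common schema so downstream analytics can treat them uniformly. The function and field names are purely illustrative.

```python
import csv
import io
import json

def ingest(raw, kind):
    """Normalize a raw record from any source into a common schema.

    Every input becomes a list of dicts tagged with its source,
    with the original content preserved under "payload".
    """
    if kind == "csv":   # structured: parse rows into dicts
        rows = list(csv.DictReader(io.StringIO(raw)))
        return [{"source": "csv", "payload": r} for r in rows]
    if kind == "json":  # semi-structured: parse as-is
        return [{"source": "json", "payload": json.loads(raw)}]
    # unstructured: keep the raw text for downstream NLP
    return [{"source": "text", "payload": {"text": raw}}]

records = []
records += ingest("id,item\n1,sensor\n2,relay", "csv")
records += ingest('{"event": "cart_abandoned"}', "json")
records += ingest("Customer asked about lead times.", "text")
```

A real pipeline would add streaming ingestion, schema validation, and storage, but the design choice is the same: converge on one record shape early so every consumer sees the same interface.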

Another pain point for company executives adopting AI is the shortage of internal AI talent. Therefore, the importance of AutoML and AI-as-a-Service will only increase. These services can help companies run experiments and proofs of concept before investing in the right AI initiatives for the business.

Q: As a learning AI depends on the amount and quality of data available, what do we need to keep this data as neutral and unbiased as possible? Will we need to employ recursive AIs to “scrub” the input for the primary AI?

A: This is a tough one, because even if you have a perfectly trained model that is a top performer, the model can get stale, and something called “concept drift” can occur. Concept drift refers to an unpredictable change in the relationship between input and output data, which degrades the accuracy of the predictions the model makes.
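As a hypothetical illustration (not part of the interview), one common way to catch concept drift in production is to compare a model's recent accuracy against its accuracy at deployment time. Real systems use statistical tests such as DDM or ADWIN; this sketch uses a fixed tolerance just to show the idea.

```python
from collections import deque

class DriftMonitor:
    """Flag possible concept drift when recent accuracy falls
    well below the accuracy measured at deployment time."""

    def __init__(self, baseline_acc, window=100, tolerance=0.10):
        self.baseline = baseline_acc
        self.window = deque(maxlen=window)  # rolling correctness record
        self.tolerance = tolerance

    def update(self, prediction, actual):
        """Record one labelled outcome; return True if drift is suspected."""
        self.window.append(prediction == actual)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        recent_acc = sum(self.window) / len(self.window)
        return recent_acc < self.baseline - self.tolerance
```

In use, each time ground truth arrives you would call `monitor.update(pred, actual)` and trigger retraining or an alert when it returns True.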

On top of that, if the AI is trained with deep neural networks, the many hidden layers in the DNN make it impossible for humans to understand or explain how it reaches its conclusions. This is what we call black-box AI. Therefore, the solution is not to create another AI to fix the first one if neither can be understood. The good news is that many researchers are working on tools to help AI systems explain themselves. This is why explainable AI is an emerging field in machine learning, aiming to help teams develop interpretable and inclusive models.

To this day, we still mostly rely on human labelling, so the best answer to this question is actually us humans.

Humans working on AI projects need to be aware of possible bias problems and collect data that is as unbiased as possible. When humans find bias in the dataset during training, they will need to zero out that bias. The team also needs to subject their project or product to greater transparency and auditing so that problems are recognized as early as possible. It is important to have different sets of test data to help ensure your system is not biased. This work of keeping the system “up to date” is ongoing and constant.
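One simple audit of the kind described above, sketched here with hypothetical field names, is comparing the positive-label rate across groups in a labelled dataset. A large gap between groups is a prompt to investigate the data, not proof of bias by itself.

```python
from collections import Counter

def positive_rates(records, group_key="group", label_key="label"):
    """Return the fraction of positively labelled records per group.

    Auditors compare these rates: a wide disparity suggests the
    dataset (or the labelling process) deserves a closer look.
    """
    totals, positives = Counter(), Counter()
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += int(r[label_key] == 1)
    return {g: positives[g] / totals[g] for g in totals}

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = positive_rates(data)  # group A is favoured 2-to-1 over B
```

Running the same check on each of the separate test sets the answer recommends helps confirm that a low disparity isn't an artifact of one particular sample.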

We are humans, and each of us comes with our own set of biases. If the AI team comes from diverse backgrounds, we can be each other’s checks and balances, and we are more likely to eliminate each other’s blind spots.

Maybe we can also train AIs to examine how humans label data, to find anomalies and point out inconsistencies, if any.

Q: Is it feasible to implement a fixed set of ethical guidelines into a learning AI that has the ability to rewrite its own code? Is this even desirable, given the diverging value systems in different industrial societies?

A: Rest assured, we are still far away from AI programming itself and getting out of control.

While it’s important to have some high-level ethical guidelines agreed upon by the global community, just as we have the treaty on the non-proliferation of nuclear weapons, it is unrealistic to think every person, organization, and entity will dot the i’s and cross the t’s on every detail of the same set of AI ethics rules, simply because everyone has a different agenda.

When I am the proud owner of a self-driving car (even though we probably won’t need to own cars by then), I probably wouldn’t want the car to decide to swerve to avoid killing a dog crossing the street, killing me in the process. But would I be morally comfortable as an owner knowing my car won’t blink an eye before killing a dog? Finally, would I buy a car from a manufacturer that takes the moral high ground and designs the car to save a baby in the wagon, even if that could end up killing me? I’m not so sure. This is, of course, the famous trolley problem.

Even if we have a set of rules, it still won’t work for AI. Just look at the English language: There is no single perfect way to speak the “best English,” because, in reality, the language is constantly and organically changed by the people using it. No one actually follows all the grammar rules.

AI also evolves with the data it’s being trained on.

Rule-based AI cannot scale, because it’s impossible to write down all the rules there are. AI is designed to explore all possibilities to find the best optimization strategy. By design, AI systems are trained to look for loopholes. The more rules we write, the more loopholes the AI will find.

I think moral values will be reflected in the design philosophy of the products each company or organization creates, and consumers will vote with their money. Maybe that’s a very naive, capitalist way to think about it.

The role of government is still important, even though governments are usually too slow in understanding new technology to regulate it properly. However, we do need governments across the global community to create the “non-proliferation of AI weapons” treaty of our time, except that this treaty is really for regulating humans.


Charlotte Han processes data and computes brand and digital strategies for a living. Thanks to growing up in Asia, becoming American in Silicon Valley, and now living in Europe, she’s learned not to take things for granted and to make connections where they may not seem apparent.

She is highly interested in all things tech, especially how technologies can advance human lives. She enjoys networking with the misfits, the rebels, and the troublemakers who aren’t afraid to shake things up and push the boundaries of what is possible.
