Bench Talk for Design Engineers | The Official Blog of Mouser Electronics

Suddenly Self-Aware AI Making Itself Smarter By The Nanosecond: What Could Go Wrong?

Arden Henderson


The "technological singularity" is almost here. No, really. Well, maybe not. The technological singularity is the hypothetical event in which a true artificial intelligence comes into being, becomes self-aware, and begins recursive self-improvement, creating entities more capable than itself. That's one way to put it. [1] [2] [3]

There's no question in most (thinking) people's minds that the technological singularity would be disruptive [4], perhaps the most disruptive of all technological advances. Of course, sci-fi writers have explored AI's potential for disaster and good (mostly disaster) for decades. Well-known movies about robots building better robots have supplied clichés now deeply embedded in the collective pop consciousness.

But how do we get from here to there? (And do we want to? Should we be mindful of the "be careful what you ask for" scenario?)

That's where the field of Artificial Intelligence, or AI, comes in. And it has never been stronger or busier. [5] [6] AI is the study and creation of intelligent entities that continuously improve and maximize their chances of success. The catch, right now, is that "intelligent" machines, entities, what have you, must be told what to do. By humans. But advances in understanding how to build a better thinking machine appear to be coming at ever-increasing rates, sometimes resulting in unsettling conversations with learning robots [7].
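That loop, an entity that improves its chances of success, but only against an objective a human wrote down, can be sketched as a toy epsilon-greedy learner. This is a hypothetical illustration, not anything from the article; the reward table is the part the human supplies, and the "intelligence" never reaches beyond it.

```python
import random

def epsilon_greedy_learner(rewards, steps=10000, epsilon=0.1, seed=0):
    """Toy 'intelligent entity': learns which option pays best,
    but only against the reward values a human specified."""
    rng = random.Random(seed)
    estimates = [0.0] * len(rewards)  # learned value of each option
    counts = [0] * len(rewards)       # times each option was tried
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(rewards))  # explore: try something random
        else:
            # Exploit: pick the option believed best so far.
            arm = max(range(len(rewards)), key=lambda a: estimates[a])
        # Observed reward is noisy around the human-assigned mean.
        r = rewards[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        estimates[arm] += (r - estimates[arm]) / counts[arm]  # running mean
    return max(range(len(rewards)), key=lambda a: estimates[a])

# The learner settles on whichever option humans made most rewarding.
best = epsilon_greedy_learner([0.2, 0.8, 0.5])
```

The machine "improves" in a real sense, yet every goal it pursues was handed to it. That gap between optimizing a given objective and choosing one's own is exactly what separates today's AI from the singularity scenario.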

The notions of "thinking" and "self-awareness" are just part of the big picture. For a long time, humans have mulled over how to test a machine, to see whether it has crossed the border from merely being programmed to thinking on its own at some level of self-awareness. The Turing Test is legendary. [8] [9] Simply put, the goal is for a machine to engage in small talk so convincingly that a judge cannot tell whether it is a human or a machine. Such chatterbots [10] have found their way into children's toys, such as Hello Barbie. This is typical of a now decades-long gradual incorporation of AI technologies into consumer, business, financial, agricultural, government, and military uses.
Perhaps when the technological singularity happens, it'll be old news, thanks to the familiarity of almost-AI-everywhere by then.
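A chatterbot of the kind described above is, at bottom, pattern matching with canned responses. A minimal ELIZA-style sketch (illustrative only, and a very long way from fooling any Turing Test judge) might look like this:

```python
import re

# Minimal ELIZA-style chatterbot: match patterns, reflect pronouns back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\?$"), "What do you think?"),
]

def reflect(fragment):
    """Swap first/second-person words so replies read naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Tell me more."  # fallback when nothing matches

print(respond("I feel trapped by my toaster"))
# prints "Why do you feel trapped by your toaster?"
```

There is no understanding here at all, just text transformation, which is why a judge with a few probing questions unmasks such a bot quickly.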

Lots of really smart people have predicted when the technological singularity will arrive. [11] And lots of really smart people have issued dire warnings about the too-fast adoption of AI and where AI is headed. [12] [13] [14] Are the predictions and warnings valid? Will it be the end of humans? One unwieldy acronym that has popped up in relation to AI is TEOTWAWKI: The End of the World as We Know It. A different world is not necessarily bad (it could be good), but there will certainly be disruption, and quite possibly danger, if a mass-reproducing AI decided humans were expendable or otherwise in the way. Post-singularity AI has surfaced in studies alongside other threats looming on the horizon, such as seas rising three feet and food shortages caused by extreme weather. [15] [16]

At the moment you are reading this, there's no real reason to turn slowly in your chair and regard your smartphone or toaster or home automation system with a wary eye. If your job involves working on industrial robots, just follow best practices and power down before applying a wrench. If you are a day trader, you probably have more to worry about from high-speed robot traders (which is more a regulatory policy issue than a technical threat). But it's not as if robot traders are going to suddenly spread across the internet, ooze into gazillions of smartphones, and exponentially increase their computing ability until they achieve self-awareness.
Right? (See: Skynet)

Yep, chances of that happening are pretty slim. It's a long, long way to the singularity. [17] [18] [19] Machines do what we tell them to do.
Computers run software written by humans. Software always has bugs. So does AI. Just think of the glitches and security breaches that occur almost daily these days. Bugs. Complexity that precludes ever testing software 100% before shipping. Writing good software is hard; writing bug-free software is harder. It's a good bet, based on decades of average software quality, that the technological singularity tipping point is a long way off, and perhaps will never happen.

Or maybe, someday, some software developer in some cube somewhere comes to work and realizes that a program a top-notch team spread across four countries has labored over for more than a year has changed overnight, with fixes for bugs the team hadn't even realized existed, and sophisticated improvements that had never occurred to anyone.

A long vacation in that secluded off-the-grid cabin you bought some years ago, way out in the desert or up in the mountains, might be a good idea.

Safety Tip: Don't take your driverless smart car. Walk. Don't run.






For Further Reading:




[1] https://www.singularityweblog.com/17-definitions-of-the-technological-singularity/

[2] https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

[3] https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity.htm

[4] https://www.ca.com/us/lpg/ca-technology-exchange/technical-singularity-the-ultimate-disruption.aspx

[5] https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

[6] https://www.aisystems.news

[7] https://glitch.news/2015-08-27-ai-robot-that-learns-new-words-in-real-time-tells-human-creators-it-will-keep-them-in-a-people-zoo.html

[8] https://www.loebner.net/Prizef/TuringArticle.html

[9] https://en.wikipedia.org/wiki/Turing_test

[10] https://en.wikipedia.org/wiki/Chatterbot

[11] https://en.wikipedia.org/wiki/Turing_test#Predictions

[12] https://www.newsmax.com/newsfront/stephen-hawking-elon-musk-letter-artificial-intelligence/2015/07/27/id/659161/

[13] https://www.naturalnews.com/046457_artificial_intelligence_technology_elon_musk.html

[14] https://www.oldthinkernews.com/2015/06/09/billionaire-cartier-owner-sees-wealth-gap-fueling-social-unrest/

[15] https://www.examiner.com/article/study-suggests-artificial-intelligence-most-likely-cause-of-teotwawki

[16] https://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact-Executive-Summary.pdf

[17] https://singularityhub.com/2015/08/16/are-you-a-thinking-thing-why-debating-machine-consciousness-matters/

[18] https://singularityhub.com/2015/08/21/why-this-neural-net-thinks-the-starship-enterprise-is-a-waffle-iron/

[19] https://singularityhub.com/2015/06/19/this-is-what-happens-when-machines-dream/


Arden Henderson spent at least part of his life toolsmithing in dark, steam-powered workshops of software tool forges long gone, drenched in blood, sweat, and code under the glare of cathode ray tubes, striving for the perfect line of self-modifying software and the holy grail of all things codecraft: The perfectly rendered pixel. These days, when not working on his 1964 Flux Blend time machine (which he inadvertently wrecked before it was built after a particularly deep recursive loop), Mr. Henderson works in part-time castle elf and groundskeeper jobs, chatting with singularities spawned from code gone mad in vast labyrinths of vacuum tubes, patch cords, and electro-mechanical relays. Mr. Henderson earned a B.S.C.S. late in life at Texas A&M. Over the hundreds of years gone by before then and after, he has worked in various realms ranging from petrochemical wonderlands spread across the flat Gulf Coast saltgrass plains, as far as the eye can see, to silicon bastions deep in the heart of Central Texas.
