
Hi, Sophia: World Debut of Female Humanlike Robot at T-Edge Innovation Summer Summit

Sophia can simulate almost all human facial expressions. “She” can even blush, since her face is made from frubber, a special artificial skin material. Moreover, Sophia can spontaneously track a human face, listen to speech, and generate a natural-language response.

(Chinese Version)

Editor’s Note:

Hanson Robotics, one of the most cutting-edge humanlike robot manufacturers, debuted its latest female robot, Sophia, at the T-Edge Innovation Summer Summit. Everybody at the scene was amazed by Sophia, since she could simulate almost all human facial expressions.

Coincidentally, many people, including David Hanson, the founder of Hanson Robotics, found that Sophia looks like Zhao Hejuan, founder of TMTpost and CEO of BTmedia group.

According to David, he and his team are still improving Sophia’s performance and plan to officially release the robot this October, but he specially brought a prototype of Sophia to this year’s T-Edge Innovation Summer Summit.

Sophia can simulate almost all human facial expressions. “She” can even blush, since her face is made from frubber, a special artificial skin material. At the summit, David displayed Sophia’s wide variety of facial expressions and explained to the audience how his team crafted her.

First of all, we’ve taken a series of photos of Sophia. Let’s have a look:

There is also a short video to give you a better understanding of what Sophia can do.

The following is the edited transcript of David Hanson’s speech at the T-Edge Innovation Summer Summit:

Hello, everybody,

I’m happy to introduce to you a friend of mine, here, who’s hiding (don’t be shy). This is Sophia. Sophia is a prototype of Hanson Robotics’ first truly mass-produced product line. Hanson Robotics develops extremely life-like robots that are designed to become friends with people. This means that they are designed to perceive your face and your facial expressions, understand speech, and model these things: what you might be thinking or feeling, and to start building a relationship with you over time. So our goal is to create machines that are truly capable of understanding what people are, who we are and what we care about. So before we go any further with my presentation about what our technology is about and what our dream is, I would like to just show you what Sophia can do.

Sophia can display a series of natural facial expressions.

So you will see that she can express a variety of emotions with artificial muscles (basically, motors simulating the muscles of a face). She can simulate more or less the full range of facial expressions. She might even get upset.
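As a rough illustration of this motor-driven approach, here is a minimal sketch that treats an expression as a set of target positions for named face actuators and blends between two of them. The actuator names, values and blending scheme are hypothetical stand-ins for illustration, not Hanson Robotics’ control software.

```python
# Hypothetical sketch: an expression as target positions (0.0-1.0) for named
# face actuators ("motors simulating the muscles of a face"), with a simple
# linear blend for transitioning between two expressions.
SMILE = {"mouth_corner_l": 0.9, "mouth_corner_r": 0.9, "brow_raise": 0.3}
UPSET = {"mouth_corner_l": 0.1, "mouth_corner_r": 0.1, "brow_raise": 0.8}

def blend(a, b, t):
    """Interpolate between expressions a and b; t=0 gives a, t=1 gives b."""
    keys = set(a) | set(b)
    return {k: (1 - t) * a.get(k, 0.5) + t * b.get(k, 0.5) for k in keys}

print(blend(SMILE, UPSET, 0.5))  # halfway between smiling and upset
```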

Right now, what I am going to do is just flip through some of her expressive animations and let her interact with you. For now I’ll just demonstrate some of the expressive capabilities of her face and emotions. She simulates the muscles of the neck, so she can perform natural actions. She has a camera in her eyes and a 3D spatial sensor, so she has the perceptual capabilities to perceive 3D space and a variety of senses. So maybe I should just put her in autonomous mode now. With this, she can actually see my face and respond to my face. In the process, she’s using face-tracking technology to read my expression; she can also exhibit facial expressions and respond to them. We have been using Google Voice, but we are also partnering with JIAOWAI here in China to provide Chinese speech capabilities for conversational speech, and this is something that we look forward to demonstrating very soon. We can get the Google recognition, but we can’t get JIAOWAI yet. Fortunately, we’ve got an amazing partner here in China. So I’m now going to click through, and let you interact with her afterwards. You’ll get to know her and she’ll get to know you, and I hope that you’ll become friends.

Our technology goes back a number of years, to my PhD studies: developing robots that can mimic a wide range of facial expressions using an experimental, nanotechnology-based material modeled on how human soft tissue is organized into cells. So we created this cellular material, which produces more natural expressions at a fraction of the power required by previous materials, and its expressions look more life-like. That fraction of the power requirement means that we can put these faces on walking human-like bodies and make them walk around. It also means that they are less expensive to produce than previous expressive robots. We then combine these technologies into a total system and control them wirelessly from our intelligence software, combining best-of-breed speech recognition and face perception technologies, so that the robots can answer your questions, recognize your expressions and come to life, like living characters. They don’t have to be perfectly realistic; they can be kind of cartoon-like, they can be pretty much anything that you can imagine in computer animation or in fiction. This is like science fiction coming to life.

What this robot is doing right now is spontaneously tracking a human face, listening to speech, and assembling and generating a natural-language response. So what we do is create and craft a personality, and we also do natural language generation so that she can spontaneously reason out responses. Sometimes we also expertly craft those responses for specific applications, so we can do customer service, autism therapy, etc. So I’m just going to quickly go through a few concepts. These machines are clearly built for relationships. You see a lot of this in science fiction, and there are many applications, but robots that can relate with us tap into something; it’s like neuro-hacking. It hacks the human propensity to interact face to face with each other, so that we are drawn in. Some neuro-scientists estimate that more than 70% of the brain’s circuitry is activated during social encounters. We have evolved to coordinate with many other minds. So what we can do is use this kind of interface, this naturally mimicking, human-like interface, to simulate what we do as humans, activating those regions of the brain and opening up a social dialogue.
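To make that perceive-and-respond pipeline concrete, here is a minimal sketch of one interaction step. Every function and the canned-response table are hypothetical stand-ins for illustration; the source does not document Hanson Robotics’ actual software.

```python
# Minimal sketch of the perceive -> understand -> respond loop described above.
# All components are hypothetical placeholders, not Hanson Robotics' code.
import random

def detect_face(frame):
    """Stand-in for the camera/3D-sensor face tracker."""
    return {"position": (320, 240), "expression": "smiling"}  # dummy output

def transcribe(audio):
    """Stand-in for a speech recognizer (e.g. a cloud speech service)."""
    return "hello sophia"  # dummy transcription

def generate_reply(utterance, face):
    """Crafted responses for specific cases, plus a generated fallback,
    mirroring the crafted-plus-generated approach described above."""
    crafted = {"hello sophia": "Hello! It's lovely to meet you."}
    fallback = ["Tell me more.", "That's interesting.", "I see."]
    return crafted.get(utterance, random.choice(fallback))

def interaction_step(frame, audio):
    face = detect_face(frame)               # spontaneously track the human face
    utterance = transcribe(audio)           # listen to speech
    return generate_reply(utterance, face)  # assemble a natural-language response

print(interaction_step(frame=None, audio=None))
```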

During my PhD studies, I took a lot of neuro-science classes. One of my professors pointed out that, as far as she knew, scientists understood less than 1% of what’s transmitted in face-to-face interaction. Humanity and neuro-science understand less than 1% of what we are exchanging. There is a massive amount of data going back and forth. If we can train our computers to understand that data, they can understand us. It is a very rich source of data if you have the right kind of AI technology to interpret it. So what we’ve been doing is developing an interface that elicits that kind of response, that natural conversational give-and-take, and then the artificial intelligence software that can take that rich data and transform it into an understanding of who we are. We want the robot to understand who we are.

I have a background in engineering, art and technology, which gives me an interest in human reactions to the so-called “uncanny valley”. We don’t understand our reactions to robots, or to art for that matter. But what we can do is craft it, and eventually, when we hit the right kind of balance, these machines can be truly endearing. With some of these robots, like Sophia, people tend not to be particularly upset: people will sit and have a conversation, reach out and hold its hand while talking with the robot. The smaller Einstein robot also has a similar effect. Some of these robots in medical applications or autism therapy also get a similar kind of positive reaction. So there is this concept, the uncanny valley. What is interesting to me is that I do believe there is an uncanny effect. The uncanny valley theory says that at a certain level of realism people will find it creepy, basically, right? But when you make it perfectly real, people will open up to it. What I find is that there is actually a bunch of aesthetic variables that can make it possibly creepy or possibly really endearing, and the right balance makes it hypnotic. If you strike the right balance, people will actually attend to it and engage with it. So part of my graduate studies involved running psychology experiments, testing how people respond to these robots. People were not appalled by them but were really attracted to them, and would spend a lot of time with the robots. In every application that we’ve tested, and in numerous early trials, we find that people will truly engage with these robots, which means that, in that sense, people quite like them. You might say, in another sense, that there is a region in the “uncanny valley” where, if you get things just right, you are bridging it, like living on the bridge and building up from there.

Some of the applications involve people who can’t recognize facial expressions or place them in social context, for example, children with autism. What we’ve done is take some of these human-scale robots and small-scale robots and test them, and in some of these applications we found that the effect is amazing. I mean, these kids open up and just attend to the robots, showing a willingness to interact that their therapists hadn’t observed previously. The theory is that the robot activates those dormant regions and social networks in the brain that neuro-typical individuals use when interacting socially. Because of the complexity and chaos of social interaction, people with autism tend to avoid it because it’s just over-stimulating, so they sort of shut those systems down and avoid that kind of social interaction. But robots are a little bit more predictable. Interacting with robots sort of activates the dormant regions of the brain and opens up the dialogue. Once the regions are stimulated, maybe it’s refreshing or something, because the studies show that the social activation seems to carry over to interactions with therapists and with parents, so these preliminary studies were very, very promising.

Another application is medical. This includes human-patient simulators: there is an over-1.2-billion-dollar market in the US for robotic human-patient simulators, on which doctors, nurses, other medical professionals and even military professionals have to practice. A lot of military personnel practice trauma situations on these. Those are very low-fidelity simulators, and a lot of studies show that low-fidelity simulation and training translates to inadequate training. So the higher the fidelity of the simulation, the better the training. What we’ve done is take these robots that we produced in prototype form. They were very costly, so we set up a team in Hong Kong about a year and a half ago. We teamed up with twelve factories that make expressive dolls, which use very similar technologies in some ways, but their costs are phenomenally low. It’s like miraculous process engineering that exists only in China; it’s the only place in the world that I know of that knows how to do this kind of creative development. We combine that with our social expression and artificial intelligence technologies, and we have something that can be made at incredibly affordable prices. So we combine our technology with toy technology, and I think it’s an unbeatable combination.

There are even entertainment applications. For example, people can go up to a robot in a science museum and talk to it. What does it mean to be human? Everybody cares about this question, and robots have become an engaging way for scientists to address it and pursue that scientific quest.

Disney parks have been improving their animatronics for generations. There are around four dozen theme parks doing more than 20 million dollars in revenue with animatronics. That’s lucrative. We can enter this market as well.

We are now experiencing a golden age in artificial intelligence. Artificial intelligence has begun to understand grammar, even funny little grammatical structures. We have some really good natural-language scientists (AI scientists) on our team who have addressed these problems, and the results surprised me; some of them have been really good. And beyond what we’ve done on our team (you know, we are only a few dozen people), there are projects around the world that have shown stunning results lately, like IBM’s Watson. They can understand these things really well, to the point that they can answer questions and solve little problems. For them not to cost much, they have to be based on a server, and this is also the fundamental architecture for our software. We take our software and run it through this interface, which gathers rich data that includes the social context of this kind of behavior. We’ve created an open API and open-source tools so that we can integrate lots of components from companies beyond Watson.
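As a rough sketch of what such an open, server-based architecture could look like, the example below defines one shared interface that different question-answering components (a Watson-like service, an in-house engine, and so on) might plug into. The class and method names are assumptions for illustration only; the source does not specify the actual API.

```python
# Hypothetical sketch of a pluggable, server-based AI architecture:
# components from different vendors implement one shared interface.
from abc import ABC, abstractmethod

class QAComponent(ABC):
    """Common interface any third-party component could implement."""
    @abstractmethod
    def answer(self, question: str, social_context: dict) -> str: ...

class EchoComponent(QAComponent):
    """Trivial built-in placeholder component."""
    def answer(self, question, social_context):
        mood = social_context.get("expression", "neutral")
        return f"You look {mood}. You asked: {question}"

class Robot:
    """The robot forwards questions, with rich social context, to a backend."""
    def __init__(self, backend: QAComponent):
        self.backend = backend

    def respond(self, question, expression="smiling"):
        return self.backend.answer(question, {"expression": expression})

robot = Robot(EchoComponent())  # swap in any other QAComponent implementation
print(robot.respond("What is the weather like?"))
```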

The really cool thing is that I believe this is a new stage of the revolution, because what we are doing is forging a new culture. We release this low-cost product not just for the high-value applications that I’ve described, but also for entertainment and art: these little characters come to life, get to know you, and even become your friend and help you. For example, they can tell you what the weather is, help with your homework, and answer questions. This could be an amazing art form, and I think people would just fall in love with them.

We don’t know if these machines are going to be safe. People might speculate that way. Who knows? My feeling is that we’ve got to raise them. If we raise them with us, they come to understand us, and if the AI system cares about us, then we are safe. We are developing this with a cloud infrastructure, and we call it the “dream time”: within it, thoughts and experiences are crafted by AI technicians, then we test them with people, and then it keeps learning, building larger models. Imagine you have this deployed to millions of people; most of them are good and kind. That’s what this “dream time” is going to learn. Only a small fraction of the information will come from cruel people. What that means is that if we cultivate this correctly, raise it correctly, then it will have a core of goodness and understanding, and it will be able to deal with those cruel situations compassionately, rather than building up an internal grudge. We all encounter cruelty in our lives, but it usually doesn’t define us unless the preponderance of our life experiences is cruel; the system would have to pass that point to become cruel. This is the way to make AI, and AGI, deeply safe, friendly and even wise.
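As a toy illustration of the “preponderance of experiences” argument above, the sketch below pools interaction records from many robots into one shared model, whose learned disposition simply reflects the mix of kind and cruel encounters. Everything here (class name, sentiment labels, counts) is hypothetical; it is not how the “dream time” is actually built.

```python
# Toy sketch: a shared cloud model whose "core" reflects the preponderance
# of experiences contributed by all deployed robots.
from collections import Counter

class DreamTime:
    def __init__(self):
        self.experiences = Counter()

    def record(self, robot_id, sentiment):
        # robot_id is kept only to show that many robots feed one model
        self.experiences[sentiment] += 1

    def disposition(self):
        """Fraction of each sentiment in the pooled experience."""
        total = sum(self.experiences.values())
        return {s: n / total for s, n in self.experiences.items()}

cloud = DreamTime()
for i in range(95):
    cloud.record(robot_id=i, sentiment="kind")   # most interactions are kind
for i in range(5):
    cloud.record(robot_id=i, sentiment="cruel")  # a small cruel fraction
print(cloud.disposition())  # {'kind': 0.95, 'cruel': 0.05}
```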

Afterwards, I’ll let you interact with her. This particular model is an early one. We’ve commercially deployed 48 units over the years, crafted in our garages, but manufacturing didn’t advance that easily; you know, they were pretty costly. So this is the first unit that has been designed for manufacturing, and the next version of this robot will be even more realistic. Art and engineering were both involved in crafting this model. There are 48 major muscles in the human face, and Sophia can simulate almost all 48 muscles and almost all the facial expressions and actions that occur in face-to-face interactions between people. Sophia’s face is made from the frubber material, so she can even blush.

Moreover, we are using intuition, which is central to making unique technology meaningful to human interaction. Smartphones are the fruit of a lot of scientific discoveries, but ultimately it’s aesthetic discovery that makes them intuitive. We are working with some labs: the body of Sophia was made by a Tokyo-based company, and my company developed the face. If we can enact the genius of human artists, utilize robotics technology and bring it into people’s daily lives, then people will care about it even more. So it’s not just AI, not just robotics technology; it’s crafting these into a cultural object, combined with market dynamics. Then you have a kind of natural selection that grooms the artistry and the technology: the underlying AI is naturally selected by market forces to become more compassionate, friendlier and more valuable to us. Then you have a process of “converging evolution” toward human-like values. For the AI software, we are putting the infrastructure in place to allow this “converging evolution” to happen.

We think of it as not just AI, but as the fabric that will result in super-human intelligence, what I call genius machines. So I think in 10 to 15 years it’s plausible, at least plausible or even likely, to have machines that surpass humans in almost any capacity that we think of as creativity: the ability to invent things, the ability to seek out the fundamental questions, the kind that result in Nobel Prizes being awarded to machines. Genius is the combination of intuition and knowledge, and we are planting the seeds for this to happen. Creativity is essential, since it is the ability to adapt. I do believe that such machines, as they come into existence, will be valuable: the more intelligent the machine is, the more valuable it is. Deep learning, as an AI technology, is more valuable because it generalizes and can do more than previous shallow-learning techniques, so we are seeing this trend. Watson is more valuable because it has a more general understanding than previous, simpler kinds of natural language processing. It’s becoming more and more unpredictable. We’ve got theories of how to make it more intelligent, and I believe that we’ve got the right framework.

One thing I want to point out to you is that creativity and imagination, in the social context, mean that you can imagine what I might be thinking, and I might imagine your thoughts and feelings; then we have empathy, and then we can really do something. If we can sympathize with the world, that goes beyond simply understanding each other: it also means imagining a better future and building things together that solve the world’s problems. That’s what this whole event is about; it’s all of us coming here together to solve these problems. Can we do this with artificial intelligence? I believe so.

I believe that creativity and imagination are absolutely essential to making machines safe, to understanding what we want, feel, desire and need, and to doing things better than we can. Super-human capabilities: that’s what we are after. We have embodied the architecture we proposed into what we call the “dream time”, a cloud AI for controlling these multiple robots, and it is at the core of a whole set of components of the system. Indeed, we are building a super-human “being”, not a machine.

Q&A

Q: Human beings have existed for a long, long time; how can we be so sure that now is the time we will be able to create a robotic being in the foreseeable future? And won’t it be very risky when we rely on the nice side of robotic beings and get so used to having them around us? What can we do if one day something goes wrong, whether the database gets lost or something else, and our whole society collapses?

A: We are at a very unusual moment in history. This dream of creating an artificial being has existed as long as humans have existed. In ancient myths people imagined synthetic people; figures like Leonardo da Vinci tried to design them, and such quests existed even in ancient China. Even the quest for computers began a long time ago, that is, as an attempt to simulate human reasoning in mathematics. The foundation of so much technology today was actually born in this quest, but only now do we start to see hope and the possibility of achieving this dream within the next few years. Why is now the time for this? That’s a mystery, but it’s fun and amazing.

However, it’s also possible that disaster may befall us. In the last 50 years, we have developed the capability of wiping out all life on the planet through nuclear warfare. That is the first time in history that humans have had the ability to wipe out the human species on earth, and we are also destabilizing the eco-system in ways we don’t fully understand; we are still debating about it. But then you have these other technologies that can address these threats, and artificial intelligence could be one of them. There are very strong arguments about why, if we develop human-level artificial intelligence machines, most likely they won’t care about us. Even if they do care about us, they might not understand the impact of their actions; they might think that what they are doing is good and yet be imprisoning us, destroying us inadvertently. For these reasons, in most of the research you see on the screen here, companies are building machines that don’t look like human beings; they resort to machines that are alien. It’s like a dog raised among wolves: it doesn’t sympathize with us, so it’s not safe. Dogs have actually co-evolved with human beings for 170,000 years, according to biologists, which means they’ve come to sympathize with us. By developing this cloud mind and having it interact with millions of people, in one year you would have millions of years of evolution through these interfaces. That’s what we are looking at.

By 2025, most likely, your computer and your phone will become as powerful as the human brain in terms of computational ability. Will there be any disadvantages? Yes, of course, such as job loss, and people might become “strange”. These dual scenarios are real, and who knows what else? Therefore, we are pursuing computational compassion and wisdom, which means seeking the greatest benefit for the greatest number of people with this “dream time” AI. We think it’s the only way that this type of machine should be developed. For that reason, we founded a non-profit initiative, called the Initiative for Awakening Machines, and we make our technology open source, so that other people can contribute to this quest, but also question the implications: how can we make sure that this comes out safe?

 

[This article is published and edited with authorization from the author @Zhao Lei. Please note the source and include a hyperlink when reproducing it.]

Translated and edited by Levin Feng (Senior Translator at ECHO)
