  #1  
Old 09 May 2018, 01:19 PM
Hans Off

Join Date: 14 May 2004
Location: West Sussex, UK
Posts: 4,600
Google just gave a stunning demo of Assistant making an actual phone call

https://www.theverge.com/2018...duplex-io-2018

I have been following a couple of fascinating discussions on Twitter about the implications of this.

Some views of note are that a moral and ethical line has been crossed with the new technology, but no one I have read has managed to actually articulate what that line might be.

My own current view is that, morally and ethically, this is no different from any other extant AI technology, such as the text-based helpers on customer service sites that tend to begin with a bot response until the person on the other end goes so far off script that an “actual human” is needed to carry on the conversation.

The difference with the new tech is that it is indistinguishable from a real person in audio rather than text.

I see a general fear of the technology (it gives people the willies, for sure), but, like other forms of AI (live text or formulaic emails), we will eventually get used to it.

Perhaps, as I have said on my Twitter, when a Westworld-type host knocks on my door and tries to sell me a vacuum cleaner, I will “nope out” of the modern age and throw my technology in the bin.

So what are people's thoughts on this? Do you feel that a moral or ethical line is being crossed here? If so, what line is that exactly?

There is the “bad actor” worry, but the risk is no different from any existing phishing or cybersecurity scam; it’s just a different communication methodology.

Thoughts?

Last edited by Hans Off; 09 May 2018 at 01:19 PM. Reason: spelling

  #2  
Old 09 May 2018, 02:20 PM
thorny locust

Join Date: 27 April 2007
Location: Upstate NY
Posts: 9,516

I don't think the technology itself is immoral or unethical. I think that, in this society, it's very likely to be used for unethical purposes.

I'm already annoyed enough at being called up by bots. The more they're made to sound like humans, the more annoying they are, because it takes longer for me to figure out that I should just hang up. (No, I can't just not answer unknown numbers; they might be new customers.)

If this thing could be made to recognize the phrase "do not call" and respond by taking the number off the call list forever, that might be useful. I'm sure scammers would just disable the feature, though.
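
A minimal sketch of what that could look like, assuming a hypothetical dialler loop and a persistent opt-out list (nothing here is from the demo; every name is invented):

Code:
# Hypothetical sketch: honour "do not call" by opting the number out for good.
# This is not Google's API; the database and the dialler hooks are invented.
import re
import sqlite3

OPT_OUT = re.compile(r"\bdo not call\b", re.IGNORECASE)

def init_db(path="do_not_call.db"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS opt_out (number TEXT PRIMARY KEY)")
    return db

def handle_utterance(db, number, utterance):
    """If the callee says 'do not call', record the number and end the call."""
    if OPT_OUT.search(utterance):
        db.execute("INSERT OR IGNORE INTO opt_out VALUES (?)", (number,))
        db.commit()
        return "hang_up"      # signal the dialler to end the call politely
    return "continue"

def may_dial(db, number):
    """Check the opt-out list before ever placing a call."""
    return db.execute("SELECT 1 FROM opt_out WHERE number = ?",
                      (number,)).fetchone() is None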

  #3  
Old 09 May 2018, 02:29 PM
ganzfeld

Join Date: 05 September 2005
Location: Kyoto, Japan
Posts: 23,625

Quote:
Originally Posted by Hans Off View Post
Do you feel that a moral or ethical line is being crossed here?
I'm not sure I understand what moral or ethical lines you are talking about. Did the Assistant lie or do something morally or ethically questionable in the demo? I'd like a hint about what kind of morals or ethics you mean. I don't get what it has to do with phishing or scams. I mean, was it used to do that? (I guess I have to see the video to find out what the Assistant did that was questionable.)

  #4  
Old 09 May 2018, 02:43 PM
Hans Off

Join Date: 14 May 2004
Location: West Sussex, UK
Posts: 4,600

Quote:
Originally Posted by ganzfeld View Post
I'm not sure I understand what moral or ethical lines you are talking about.
Neither am I. Twitter’s response seems to fall into two main camps: those who say it is somehow crossing a moral or ethical line (without actually articulating what that line might be), and those who respond with pictures of tinfoil hats and tell people to lighten up about new technology.

I seem to be part of a smaller (certainly less vocal) third camp that can’t fathom why this would be unethical, and I fail to get any response other than “well, it is!”

My intent with this thread was to see whether anyone here shares similar views on the moral or ethical implications of human-like audio AI, and why it is morally different from, or somehow worse than, an actual human scammer reading off a script.

Because I can’t really see it!

  #5  
Old 09 May 2018, 05:38 PM
Alarm

Join Date: 26 May 2011
Location: Nepean, ON
Posts: 5,717

Getting an AI to make calls can't be less ethical than getting a machine to send pre-recorded messages intended to confuse and disrupt the electoral process....

In other words, the problem is not the technology, it's the user (the problem is between the chair and the keyboard).

  #6  
Old 09 May 2018, 06:06 PM
musicgeek

Join Date: 01 August 2005
Location: Fairfield, CT
Posts: 5,672

Am I the only one annoyed by the AI's uptalking?

(Wikipedia's more scholarly definition here.)

  #7  
Old 09 May 2018, 10:43 PM
jimmy101_again

Join Date: 29 December 2005
Location: Greenwood, IN
Posts: 6,910

Quote:
Originally Posted by ganzfeld View Post
I'm not sure I understand what moral or ethical lines you are talking about.
One ethical line would be if the AI is good enough that the call receiver believes they are talking to a human being.

Many robot calling systems already exist where it is hard to tell you are talking to a machine. Those are also unethical.

  #8  
Old 09 May 2018, 11:12 PM
Little Pink Pill

Join Date: 03 September 2005
Location: California
Posts: 7,055

Quote:
Originally Posted by musicgeek View Post
Am I the only one annoyed by the AI's uptalking?
It didn’t bother me at all, especially because it was less pronounced than the salon employee’s was. But I’m in So Cal, where it’s very common and denotes social awareness to me, rather than uncertainty.

But gender and age could be issues in using AI for phone calls. Having a male assistant with a British accent might get you a better table at a restaurant than a female American voice using upspeak, for example.

  #9  
Old 10 May 2018, 12:11 AM
ganzfeld

Join Date: 05 September 2005
Location: Kyoto, Japan
Posts: 23,625

Quote:
Originally Posted by jimmy101_again View Post
One ethical line would be if the AI is good enough that the call receiver believes they are talking to a human being.

Many robot calling systems already exist where it is hard to tell you are talking to a machine. Those are also unethical.
What is the ethical principle that they violate?

I would kind of get it the other way around. If a human were hiding in the Zoltar machine, that could be creepy. I still don't really see any ethical obligation from that alone...

(I'm not trying to be argumentative. I really want to know.)

  #10  
Old 10 May 2018, 02:30 PM
Hans Off

Join Date: 14 May 2004
Location: West Sussex, UK
Posts: 4,600

Quote:
Originally Posted by ganzfeld View Post
What is the ethical principle that they violate?

I would kind of get it the other way around. If a human were hiding in the Zoltar machine, that could be creepy. I still don't really see any ethical obligation from that alone...

(I'm not trying to be argumentative. I really want to know.)
That is pretty much my question: lots of people are saying “unethical!” or “morally questionable!” but not saying which principles are violated.

Someone has said to me that it is worse because it can scam more people at once. They say, and I quote:
Quote:
Originally Posted by Man on Twitter
“Is there no consideration for scale in your morality? Murdering someone with a firearm and killing a million or two with a nuclear weapon are morally equivalent? Neither is more reprehensible than the other, or demanding a different response?”
I pointed out that it was a bad analogy.

He has now gone on to say that punching someone to death is somehow not as immoral as shooting someone.

I think I am dealing with an idiot.

  #11  
Old 10 May 2018, 02:48 PM
ChasFink

Join Date: 09 December 2015
Location: Mineola, NY
Posts: 893

I'm not sure it's an ethical problem, but everyone needs to acknowledge that an AI operates differently from a human brain, and its mistakes can be very strange: recall Watson's Final Jeopardy flub when it answered "Toronto" in a category about U.S. Cities.

If the computer, for example, thought it was making an appointment for a tooth filling on April 10 at 4:30 PM, but the human at the other end actually made an appointment for a tooth cleaning on October 4 at 3:30 (an extreme example, but you get the idea), then the human who wanted the appointment is out of luck getting the work done, or, in the opposite case, liable for the no-show.

"I'm sorry, my robot made that call."
"Sounded like a human to me, but it's still your problem."

Having the AI identify itself as such might at least serve as a heads-up in these cases.
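
Going back to the appointment mix-up above, one mitigation (purely my speculation, not anything shown in the demo) would be a read-back check: the robot restates what it believes was agreed and flags its owner if the confirmation doesn't match the original intent. The record and field names below are invented for illustration:

Code:
# Hypothetical read-back check; the Appointment record is invented.
from dataclasses import dataclass

@dataclass
class Appointment:
    service: str   # e.g. "filling" vs. "cleaning"
    date: str      # e.g. "April 10" vs. "October 4"
    time: str      # e.g. "4:30 PM" vs. "3:30 PM"

def read_back_matches(intent: Appointment, heard: Appointment) -> bool:
    """Compare what the robot wanted with what it heard the human confirm."""
    return (intent.service == heard.service
            and intent.date == heard.date
            and intent.time == heard.time)

# The filling/cleaning mix-up would be caught here and escalated to the owner:
wanted = Appointment("filling", "April 10", "4:30 PM")
heard = Appointment("cleaning", "October 4", "3:30 PM")
assert not read_back_matches(wanted, heard)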

I really wonder what kind of errors could happen if AIs were on both sides of the call - not to mention the waste of bandwidth needed for the voice interfaces.

  #12  
Old 10 May 2018, 02:59 PM
Hans Off

Join Date: 14 May 2004
Location: West Sussex, UK
Posts: 4,600

The capacity for error is not a new phenomenon, though. The owner of the AI device would be liable for any mistakes it may make, in exactly the same way that an employer would be liable for the mistakes made by an employee; the difference being that you cannot reprimand your AI!

(Well, you could, but what would be the point? In these cases the AI is more likely to learn a positive lesson from the mistakes being made.)

  #13  
Old 10 May 2018, 03:00 PM
ganzfeld

Join Date: 05 September 2005
Location: Kyoto, Japan
Posts: 23,625

That is an interesting point, Chas. It's also interesting that predictability doesn't come up, for example in Asimov's laws, even though it seems so central to people feeling OK about dealing with people and other autonomous agents.

  #14  
Old 10 May 2018, 03:24 PM
Richard W

Join Date: 19 February 2000
Location: High Wycombe, UK
Posts: 26,292

I did wonder how the robot communicated the results of its call back to its "owner" afterwards. (If the owner has to listen to the whole call, that negates the point; reading a transcript might be slightly more efficient, but it still relies on the AI interpreting the words of the human correctly, so it might not reveal a problem.) Also, I wonder how much effort went into telling the robot what counted as a success.

In the top two examples (I've not listened to others yet) the transaction seemed quite straightforward in human terms and it seemed to be successful. But there's still information to pass back to the "owner", and stuff that's far from straightforward to automate.

In the first case, the original appointment time that the robot asked for wasn't available. The receptionist suggested a later time, but apparently the robot had been told to ask for an earlier appointment instead, and did. The receptionist had to ask what exactly the appointment would be for - which the robot knew - and then, fortunately, it turned out that there was a slot available for that service between those times. So the robot only has to record the appointment in a diary system at the time it was actually made, which is simple. It didn't need to confirm anything with its owner.

In the second case, I'm not even sure that the human and the robot are talking about the same thing at times - they're talking past each other. (It illustrates that many conversations are pointless because people only listen to their own parts!) The human manages to take enough information to make the booking - I think. The robot repeats all the things it's been told to ask about. But there's a bit of information about whether there will be a wait for the table at the booked time which wouldn't necessarily be easy to convey. In this case, again, it could be reduced to a binary "no" put in a note on the diary entry for the booking, but if it had been more complicated, I wonder how it would work. A transcript of the whole answer to the question as a note in the diary appointment? What if the decision about the booking actually depended on that information? Again, the robot would have to be told that in some way, and would have to check back with its owner before making the call again.
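
One way to picture that "pass it back" problem (my own speculation; Google has described nothing like this) is a structured summary record the robot fills in and attaches to the diary entry, with anything it can't reduce to a field becoming a flag for the owner to review:

Code:
# Speculative sketch of a structured call summary; not Google's design.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CallOutcome:
    goal: str                            # e.g. "book a table for 4 on Wednesday"
    succeeded: bool                      # did the robot think it met the goal?
    booked_time: Optional[str] = None    # what was actually agreed on the call
    notes: List[str] = field(default_factory=list)  # e.g. "possible wait for table"
    needs_owner_review: bool = False     # set when an answer fit no field

def diary_note(outcome: CallOutcome) -> str:
    """Render the one-line summary the owner actually sees in the diary."""
    status = "OK" if outcome.succeeded else "FAILED"
    line = f"[{status}] {outcome.goal}"
    if outcome.booked_time:
        line += f" -> booked {outcome.booked_time}"
    if outcome.notes:
        line += "; " + "; ".join(outcome.notes)
    if outcome.needs_owner_review:
        line += " (needs your review)"
    return line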

I know it's a very early stage, and they've not claimed to have solved those bits yet, but it seems to me that it would still be a lot more work to set up a robot to make even a basic booking, and then reliably find out whether it had managed to do so in an acceptable way, than it would be to get your secretary to do it. (Or do it yourself, since this will eventually be aimed at the general public who don't have secretaries!)

I'm sure people who employ even human PAs or secretaries would say that it takes time to be able to rely on them to interpret your needs correctly and make decisions without having to check with you all the time...

  #15  
Old 10 May 2018, 03:50 PM
GenYus234

Join Date: 02 August 2005
Location: Mesa, AZ
Posts: 26,269

That brings up one thing that might be an ethical consideration. If the AI making the call is not sophisticated enough, it adds to the workload of the person answering the phone by making them try to guess the correct responses to give the AI. If not designed well, it could be like calling a voice-response computer system that doesn't tell you the range of possible answers. An easy way out would be to allow the callee to request a human, much as many systems let you get a human by pressing 0 or saying "operator".
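
As a sketch of that escape hatch (my own invention, by analogy with "press 0 for operator"; no vendor API is implied, and the trigger phrases are made up):

Code:
# Hypothetical escalation check for an AI caller: if the callee asks for a
# human, stop the script and hand the call off.
ESCALATION_PHRASES = ("operator", "real person", "human", "speak to someone")

def next_action(callee_utterance: str) -> str:
    text = callee_utterance.lower()
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return "transfer_to_human"   # ring the owner or a live agent
    return "continue_dialogue"       # let the AI keep handling the call

# Example: next_action("Can I talk to a real person?") -> "transfer_to_human"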

  #16  
Old 10 May 2018, 04:25 PM
Richard W

Join Date: 19 February 2000
Location: High Wycombe, UK
Posts: 26,292

Even on the basic level of the robot needing to get a human to call back, that's still potentially doubling the number of calls that the human operator needs to take (assuming the worst case, in which the robot fails every time). The cases where the robot can't deal with its task become extra, time-consuming calls.

And it would be worse for the operator if the human calling back tried to refer to the robot call and expected the operator to remember it!

  #17  
Old 10 May 2018, 04:25 PM
jimmy101_again

Join Date: 29 December 2005
Location: Greenwood, IN
Posts: 6,910

Is a verbal contract between a person and an AI legally binding? In general, verbal contracts between two people are legally binding.

  #18  
Old 10 May 2018, 04:33 PM
GenYus234

Join Date: 02 August 2005
Location: Mesa, AZ
Posts: 26,269

I would assume the AI would be considered the legal agent of the person employing it.

ETA: I don't know if it has ever been tested in law, but there are third-party programs that will snipe-bid on eBay for you. And I think many (most) stock purchases are made by computer decision rather than by an actual human.

Last edited by GenYus234; 10 May 2018 at 04:43 PM.

  #19  
Old 10 May 2018, 04:52 PM
jimmy101_again

Join Date: 29 December 2005
Location: Greenwood, IN
Posts: 6,910

Quote:
Originally Posted by GenYus234 View Post
I would assume the AI would be considered the legal agent of the person employing it.

ETA: I don't know if it has ever been tested in law, but there are third-party programs that will snipe-bid on eBay for you. And I think many (most) stock purchases are made by computer decision rather than by an actual human.
So if the AI makes a mistake, the person employing it would still be bound by the contract?

  #20  
Old 10 May 2018, 05:42 PM
GenYus234

Join Date: 02 August 2005
Location: Mesa, AZ
Posts: 26,269

IANAL, but I believe that would be the case if the mistake was within the scope of the AI's job. In current law, a principal is liable for the acts of their agent that are within that agent's scope of employment. For example, if you hired a bidding agent to bid for a classic Mustang at a car auction and that agent assaulted another bidder, you would probably not be liable, as that is beyond what the agent was hired for. But you might still be required to pay for the Mustang even if the agent bid beyond what you had set as the max bid.