Designer Babies & AI, with Neil deGrasse Tyson

September 11, 2019 By Bernardo Ryan


– Hey YouTubeiverse, coming
right up, a cosmic queries edition of StarTalk, bioethics. (upbeat music) Welcome to StarTalk, I’m your
host Neil deGrasse Tyson, your personal astrophysicist. We’ve got a cosmic queries
edition of StarTalk. The subject, bioethics. My co-host Paul Mecurio. Paul! – Nice to see you again. – Welcome back dude. – Yeah, thanks for having me back. – Thanks for making some time for us before warming up Stephen
Colbert’s audience. – You’re right down the street. – Yeah, which is up the
street from here yeah. – Yeah and you provide
a limo which is nice. (laughing) – Did we? – [Paul] No. – Okay. Now, you’re not a bioethicist. – No. – Neither am I. Even though we might have
thoughts on the matter. Right? – Every day I’m constantly, (laughing) I wake up and go what
is going on ethically? – Bioethically. So we went back into our Rolodex and reinvited Professor Matthew Liao. Matthew, welcome back to StarTalk. – Thank you. – So we last had you on stage live in front of an audience
at New York Comic-Con. And we had Adam Savage with us as well and we were talking
about human augmentation and whether that would be bioethical. And you said off camera, you remember, that that was my birthday? – We sung, right? – [Neil] Oh I try to forget those things. – Yes I think 3,000 people sung to you. – [Neil] They did, they all
sang, they all did sing. – It was written on a program
that that was required. – So welcome back and good to have you. You are Director of the Bioethics program. – The Center for Bioethics at NYU. – [Neil] At NYU, Center for Bioethics at New York University
right here in town. So easy date for you. So we’ll be calling more on you. – Oh great. – As we think of these issues. So we’ve got questions, Paul. – [Paul] Yes. – I haven’t seen them, I don’t
know if Matthew’s seen them. – No he has not and we’ll
just sort of jump right in. Well let me just find
out, what is bioethics? What’s an example? Just, so we’re on the same page. – Yeah, it’s the study of ethical issues arising out of biomedical technologies. – [Neil] Mostly medical now. – Yep, mostly medical
but it could also involve things like artificial intelligence and sort of its connection to healthcare. – Yeah but AI’s not bio on purpose. – Right, but it could be. – [Neil] So what you
want is silicon ethics. – Yeah, silicon ethics, that’s right. Well a lot of people are now thinking about putting things like
brain-computer interfaces into their brains and things like that, so the silicon and the organic matter, they’re kind of merging right now. – So this complicates your job. – [Matthew] That’s right. – Or makes it more interesting, both. – What’s the fastest moving area? Is it AI? Is it genetic manipulation? – Yeah I think both of them
are occurring concurrently so, there’s the CRISPR technology, genetic technology
that’s really advancing. – I like that, because if
you can mutate my genes so I don’t have to go to the gym, then I’m a happier guy. – Is that how that works? (laughing) – That’s exactly how it
works, you can do it today. (group laughing) And then there’s the
artificial intelligence. People are using that
for things like cancer, you know, there are pathologists
looking at these images. The AI’s getting really
good at pattern recognition and image recognition. They can spot cancer cells almost as well as pathologists now. – So but that wouldn’t be an ethical thing, that’s just, the machine can do it better so let the machine do it. – Right. – Right, so ethics would be now the machine knows your condition and it’s connected to the internet, and so a hacker might have access. – Yeah, or say that you
know the insurance company knows the algorithm and tries to hack it and so to make it look like
it’s not cancer when it is, or something like that. Or issues to do with privacy. – Wow he’s paid to think about this stuff. – You have a very diabolical mind. (group laughing) – You know, come up with a way
that we can foil this system. – When you’re out for dinner, and the waitress said, would
you like to have dessert, you’re like, what do you mean by that? (laughing) Are you fun around people? I mean you’re fun, but like
if they wanted to do something a little inappropriate, like put a little extra
gas in when nobody notices. You go like, no there’s
an ethical issue there? – [Neil] How ethical are
you, is the question there. – Well there are surveys that say that ethicists aren’t necessarily more ethical. So apparently they steal
books from libraries, and they don’t call their
mothers on Mother’s Day, and things like that. I call my mom on Mother’s Day. – Okay, so it’s do as you
say and not as you do? – That’s right. – [Neil] All right, so what
questions you have Paul? – We’re gonna start with
the Patreon question. – [Neil] Patreon, let’s do it. Give some to the Patreon. – Yeah, absolutely. We love them. This is Oliver Gigas. I’m sorry if I’m mispronouncing that. “Personally I feel that
we the general public aren’t talking enough about
subjects like bioethics and AI. Even though they are clearly going to be a huge part of the future. Do either of you feel
the same way and if so, how can we better educate
ourselves on these subjects?” – I completely agree. And so one of the things I try to do is to talk to the public
about some of these issues and the work in this area,
things like gene editing, and artificial intelligence. – How much of it is just fear that people don’t
understand the technology? And so we fear everything
we don’t understand. Doesn’t it come down
to that at some level? – Yeah, I think a lot of it is that. Just people are scared
of new technologies, they’re very cautious. – You have great science fiction writers that take it to the worst dystopic future. – That’s right, you know
the robots are after us, they’re gonna kill us,
superintelligence is coming, and so people get really scared and they think oh we should
not do any of this stuff, and that’s also bad for
science, it’s bad for progress. – Yeah but I just bought a car where I don’t have a dipstick anymore and I just hit a button
and it tells me the oil, the oil stats. – [Neil] Really? – Yeah. And I’m a little weirded out by that. Like I want the physical thing. – Get off my lawn! I’m not an old man yet! – But I don’t trust them. – Young whippersnapper! – What if the oil companies have adjusted the program of that so that it’s falsely
telling me I need oil, to make extra money? – Yeah. – We should hang out,
you see what I’m saying? – Yeah, you sound like
a bioethicist already. (laughing) – Not to hang you out to dry. When the dashboard became all screen, without a mechanical speedometer, where it just, it turns on. And in what turns on, it has your mileage. And I’m thinking, this
is a screen, come on now. There’s no mechanical miles (mumbling), and I got all Old Man on it. I said, give me back my dial! (laughing) – I’m with you, I unplug
my toaster every night because I think it’s gonna catch fire. I don’t know. The whole thing is sort of overwhelming for people on some level. – Yeah, so I think you hit
on exactly the right issue. And the issue is trust. Like trust in technology,
trust in algorithms, trust in, how do we make sure that when we roll out these
technologies there’s trust? And that’s the job of the scientists but also the ethicists and– – [Neil] And the educator. – Yeah, and the educator. To make sure that we can
actually trust these things. – So here’s a question that I remembered getting asked of the public. And I remembered at the time what my answer was then
and it still is today. But the public in the
day answered differently. Here is the question. If something happens, you’re on an airplane, and something goes
wrong with the airplane. What would you trust? A button that says “Auto
fly this thing home”, or a trained navy pilot, a decorated navy pilot to bring it home? And everybody says, the pilot of course! And I’m thinking, no
give me the autopilot. Push the auto button. – What if he just had
a fight with his wife and just downed a bottle
of scotch at the airport? – That’s what I’m saying. The button didn’t have a
bottle of scotch, guaranteed. And today, I mean what
my thinking has borne out, because planes are designed so that they cannot actually
be flown by a human being, there’s too many surfaces that are under control of the computer, that’s why flying is so stable now. So do you trust the technology or not? – [Matthew] That’s right, yeah and so in order to trust the technology, you have to make sure that it’s safe, it’s tested, it’s reliable, it can’t be adversarially attacked, and that’s why ethicists like myself we ask these questions, things like, well what happens? You know, we imagine these
hypothetical examples, like what happens if
the insurance companies try to cheat you and that sort of thing, or if the hackers try to
hack into the algorithm, or the imaging thing, there’s plenty of evidence that some of these machine learning
technologies can be hacked. – Well the thing that’s
amazing to me is science, and especially what you do,
is so on track with ethics. It’s a microcosm because
in society in general, ethics seems to be the last thing. It’s like worrying about table manners at a Game of Thrones Red Wedding, right? You guys have this ability to really think about these things. Like there’s this conversation about well AI can destroy the planet. Well humans are already
kind of doing that. Is that any worse? – Yeah, maybe AI can do it better. (group laughing) – More efficiently. – [Neil] Exactly, none
of this dallying about. – Less complaining. – Well some people think
that superintelligences, if they were to be created, are gonna decide that, “hey
we’re destroying the planet, and one way to stop, to help the planet, is by killing all of us.” – Because we’re a virus. – [Matthew] They do it
because we’re viruses, yeah. – That’s the word my wife uses for me. – That’s a line from
the Matrix, the first. All right so Paul, you’ve
got more questions? – [Paul] I do. Raymond Oyung, StarTalkRadio.net, question about morals and science. Are there any circumstances in science where it would be acceptable to bypass ethics in human experimentation if the findings would
lead to greater good? – Ooh good one. Wasn’t that the entire
Nazi medical enterprise? And the Tuskegee study. – That’s the Tuskegee study as well. – Yeah just tell us about
one or both of those. That’s a great question here. – Yeah so the Nazis were
experimenting on humans. For example, they’re taking
them up into the airplanes to see how much pressure a
human being can withstand. – These were mostly Jews
and other undesirables in the Germanic model of humanity? – [Matthew] That’s right. And apparently some
people say that they were able to find out things that we
wouldn’t have otherwise found but still, I think that
it’s very clear now that we need to abide
by these ethical norms and we need to stick to research ethics. And there’s something
called the Belmont Report that came out as a result
of the Tuskegee experiments. – Just describe Tuskegee. – Yeah, it’s the experiment where there were these subjects and they
were given syphilis and they weren’t told that. – But I thought they already had syphilis. – They already had syphilis. – But they were told
they were being treated, but in fact they weren’t. And then the observation was to see the progress of syphilis
in the human body. And all of the subjects were black men. – [Matthew] That’s right. After that, when it was
discovered, basically, that was the birth of
bioethics as a field. People decided that we
shouldn’t be doing this. There are different principles
that were being proposed, things like, “do no harm”, you need to make sure the
research benefits the subject, and then you need to make
sure that there’s autonomy, there’s informed consent, so a lot of the bioethical
principles came out as a result. (talking over each other) – Well yeah “do no harm”, that’s in there. – That’s part of the Hippocratic Oath. Yeah but, talk to Mickey
Rourke’s surgeons, I mean, they violated that
thing eight ways to Sunday. I mean isn’t that sort of part of the– The medical field to me seems like, was it fair to say the
first area where bioethics was sort of really founded in some way? – Yeah. – And yet it seems like that profession, they’re all over the place. I mean there’s pimple popper shows on TLC. – Well I think maybe their
intent is to not do harm, even if they end up doing harm. Right? Like plastic surgery. It can go wrong, but
it wasn’t their intent. – It’s like me with a bad joke. (group laughing) – You did harm. – A lot of harm. That set did a lot of harm, and I can’t bring it back. – [Neil] Okay, so what you’re saying is, this is an interesting,
enlightened posture which is, no matter what is going on,
I will do no harm to you, even if doing harm to you saves the lives of a hundred other people. Because the individual has
the priority in this exchange, in this relationship. So that’s enlightened and
even profound, I think. – Is the converse of this,
what Neil just mentioned, this whole issue with measles now, because I’m really fascinated by that. So someone is morally
against a vaccination because they think it causes autism, and yet they’re putting
entire communities at risk, what is the conversation in
your field now about that? And what do you serve at a measles party? Salmonella cake? I’m just curious. – So my own view about vaccination is that we have a public
duty to be vaccinated, and so that comes from
not harming other people, so we have an obligation
not to harm other people. And so, the issue with vaccination is that, it’s the bodily integrity. We also have the right
to bodily integrity, so some people think that
we shouldn’t be forced to be vaccinated if we don’t want to. And I think that’s right, but I also think that, that doesn’t mean that we ourselves don’t have a duty to be vaccinated, so we should do it voluntarily. – So there’s a greater good. – That’s right. It’s a greater good argument. – That overrides the personal integrity. – Well personal integrity
is something you can waive, it’s your right, but
you can waive it, right? In these cases. So in this case I think
that we have a duty to serve the public by getting vaccinated. – You kinda straddled
the fence there a little. Yeah, you don’t wanna create a law. – You should run for president man. – [Neil] (laughs) That was good. – You did not answer that question. You know what? That was an unethical answer. – It’s interesting,
it’s really complicated. – And they actually dealt
with a little bit of this in “Planet of the Apes.” Because you have the intelligent chimps, and they’re doing medical experiments on the humans that they captured, and we think that’s an abomination because we’re human,
but of course we do that on lab animals all the time! So who are we to say
that they can’t do that? – And yet, the quality of
our life is much better because we do it, so it’s sort
of this whole balancing act. – That’s why we have you. – Yeah. – Yeah, okay (laughs). – Not to do experiments on you right? That’s next week. You come back and there’s a
dungeon and we take you there. – Wait, I can’t let this go. So there’s not even
some numerical threshold where you would say, harm to one person if it saves
a hundred, or a thousand, or a million, or a billion? – [Matthew] Yeah, so there’s
this view it’s called threshold deontology. Threshold deontology. – Deontology. – That’s right. And it’s the view that,
there’s a threshold, and when you cross that threshold then it might be okay to
harm somebody in order– – But isn’t it arbitrary? Who decides what the threshold is? – [Neil] That’s why we have him. (group laughing) – You’re making all of these decisions? – [Neil] He’s the ethicist. – You? I’m leaving (laughs). – You’re sitting next to an ethicist. Who makes these decisions? He makes the decisions. He and his people. – He and his people. He has a team. – Yeah, so you’re absolutely right. So where is the threshold? It’s not okay to say, kill
one to save five people. Like is it okay to kill one
to save a million people? – [Neil] Right. What is the threshold? – Or a billion people. What’s the threshold? – But if one to five is okay, – Is not okay. – Okay, but then you’re
saying, no joke here, Neil is one of the five, but then there’s a million
and you’re saying it’s okay, you’ve devalued his life based on the number of people in the group. There doesn’t seem to be any logic to that. – Yeah, so some people say that, well if we were to think
that it’s okay to kill Neil in order to save a billion people– – [Neil] How did I get
in the middle of this? (group laughing) – [Paul] Well you’re very
smart, extremely intelligent, so you’re worth a billion people. I’m worth like a dog. I’m the equivalent of a dog. – It’s the rowboat thing,
you throw out Abe Lincoln, do you keep the criminal,
like what do you do? – And by the way, how would we kill Neil? Just out of curiosity. Would it be a slow death? – I’m an ethicist not a– – The most ethical way to kill me. – Watching Lifetime
channel the whole time? – Painless, painless way of doing it. – [Neil] So tell me it’s called the. – Threshold deontology. – [Neil] Threshold deontology. – Yeah and so that’s the
view that there’s a threshold beyond which it’s okay to harm somebody in order to save a greater number. – So, towards the end of the movie “The Secret of Santa Vittoria”, I don’t know if it’s fiction, or if it’s based on a real story. There’s a town in Italy, or
it might have been France, this amazing wine-producing family, world famous for their wine, and the Nazis were coming through, and they didn’t want the Nazis to get it, so they hid the wine in a cave, and bricked it over,
and then put moss on it, and made it look aged, and
then the Nazis came in looking for the wine, and
they couldn’t find it. And they scoured the countryside,
and they decided that whoever’s the next person
that comes out on the street, they’re gonna torture them and find out where the wine is hidden. So the townspeople agreed
to let the prisoner out. They said,
“you’re free to go.” And because the prisoner
didn’t know any of this, the prisoner was just a thing. The prisoner comes out,
the Nazis torture him, and they couldn’t figure out where it was and the Nazis leave. – It would have been hilarious
if the guy they tortured was a sommelier and they just killed him. Come on man, I just got my degree, really? – What are you doing in jail though? All right well we gotta take a quick break and when we come back, more on bioethics. Really cool. When StarTalk continues. – We’re back on StarTalk. We’ve been talking about bioethics. My guest co-host this
episode Paul Mecurio. Paul. You tweet Paul? – Um-hmm. – [Neil] Give me your Twitter handle. – @paulmecurio. – [Neil] At? Okay, very creative. (group laughing) – Well, I had my people,
we gathered around, we had a long meeting. By the way it’s M-E-C-U-R-I-O
and I only say that because there’s an Australian actor Paul Mercurio, M-E-R-C-U-R-I-O. – [Neil] Mercurio. – Which is actually how I spell my name but he got in the actors
union before I did, he was in “Strictly
Ballroom” and “Exit to Eden.” So I did my first guest
appearance on a sitcom and my manager calls in, “You
have to change your name.” I’m like, “Why, did I
bust a law or something?” He’s like, “No there’s this guy.” So it’s M-E-C-U-R-I-O. And in retrospect I should
have just changed it to, like, Smith, because it would have been a lot easier. – Mecurio’s cool. Reminds me of the planet Mercury. – [Paul] Oh there you go. – And we have Professor Matthew Liao. Welcome and you’re head of the Bioethics Center at New York University and we’re reading questions. We’ve got questions from our fan base. – We have another question for Matty. I’m gonna call you Matty the
rest of the show (laughs). This is heyhider from Instagram, “Do you think CRISPR technology will “allow us to take the DNA of an athlete, “or maybe a bounty hunter, “tweak it to be even better
and stronger than the original, “and then take the DNA
and create a clone army? “Can we do that?” And if so, please send the
instructions to my bunker? No I just added that one. (group laughing) – Okay, cool. So what’s up with that? – So yes I think that’s possible. The fact that some people
are stronger than others is partly genetics, right? And so if we can figure out the genomics. – But don’t say that
because now Paul will say, “I’m not getting muscles
because it’s genetic. (group laughing) “so therefore there’s no
point in going to the gym.” – Twinkies have nothing to do with it. – I did say partly. – [Neil] Partly partly. – Yeah and so LeBron James, because of his genes, and
so if you can sequence. – Well he’s big because of his genes but is he athletic because of his genes? – [Matthew] Yeah, he needs to work out. So there’s definitely the nurture part. – You get your height and other things, very bluntly, from your genes. – [Matthew] That’s right. And so we can figure that part out, and then you can imagine using
CRISPR technology to then put that into either gametes,
or embryos, and then create offspring that have those traits. – So this is in our future? – I think so. I think this is something
that can be done. – So we will breed into
our own civilization, entire classes of people,
for our own entertainment. Is that anything different
from sumo wrestlers in Japan? – It’s called the one and done
rule in college basketball. Isn’t that what we’re doing basically? – Yeah. – [Neil] Well tell me
about sumo wrestlers, no it’s not a genetic thing but, they’re specially
treated and specially fed to be sumo wrestlers. And that’s a cultural thing. They don’t live long and
everybody knows this. I don’t think they reach age forty. So is that really any different
from doing that genetically? – So people talk about designer babies and the ethics of designer babies. So there’s the question
of whether we can do it, but then there’s also whether
we should be doing this, and I think– – That’s very “Jurassic Park” of you. (group laughing) – And I think Neil asked
a really good question. – Do you have an evil lair? – Which is that, we’re
already doing a lot of this, this hyper-parenting. Look at Serena Williams
and Venus Williams. – Yeah but that’s different than manipulating through
CRISPR, manipulating a– – But the result is the
same, it’s not different. – [Matthew] Yeah, so the question is, “What’s the difference?” – One’s psychological, and
the other’s through genetic– – Yeah, so the means are
different, that’s definitely right, but why does that make
a normative difference? Why is it ethically different when we do it at the genetic level, as opposed to after the child is born? – So maybe it’s because you might have genetically bred me this way, but I
can choose to not do this. – But can you? Shouldn’t you have bred
him in a way not to fight who he is and what he is? – [Neil] Yeah, but maybe I’ll
say I’d rather just be a poet and then you can’t stop me, whereas otherwise, if you’re raising me this other way, then
there’s all this conflict, you know, go to the gym, eat
(mumbles) squares, whatever, or it’s stay at the piano,
and it’s conflict at home, whereas you can have a genetic propensity, but then just decline the option. – Boring house though. I’d rather be like,” I don’t have a mom!” And then slam the door. – Yeah, well the problem
is what if you also genetically modified the motivations so that the child wants
to be a super athlete, or super pianist? – Could you make me wanna
be Neil deGrasse Tyson? (group laughing) – [Matthew] Maybe. – I just wanna be able to talk like this. (group laughing) – Oh yeah. So the answer’s yes, if possible, and it could happen, but we
need more of you, the ethicists, around at that time, to
either say no or yes to it. – Right. – [Neil] Good, okay. – Launchpadcat, Instagram,
“Is there any such committees that regulates new technology
such as genetic tech, or AI, and puts regulators in
place pre-emptively, to prevent it from being
used for amoral things like eugenics or something of that sort? – So the U.S. has a– – [Neil] Just remind
people what eugenics is. – Eugenesis, yeah. – No, remind people what eugenics is. – Oh, eugenics is this
idea, it means well born, and so you know the Nazis
were trying to breed some race or some class
of people, thinking that some genes are better than others. And there’s a– – But even at a time
when the concept of gene was not really, they just
knew if you breed two people who are desirable, presumably
you’ll get a desirable person, and then you prevent
others who are undesirable from breeding and then
you can systematically shift the balance in the population to be a demographic who you want and care about. So the Aryan ideal was
then what was sought. – Isn’t that happening in a
way with breeding dogs and breeding purebreds and
sort of this inbreeding? – And plants and– – So Irish Setter’s
are out of their minds, because they’ve been bred so much. We have a dog that we adopted, it’s like a mutt and she’s– – [Neil] Totally chill? – Totally chill. But breeding– – We’ve been doing it
with plants and animals and other animals– – But is it really gonna
be a board that’s gonna oversee this pre-emptively? I mean I said this before, but look at the medical profession. There are a lot of questionable
things that are going on in the medical profession, and there’s a board that
oversees that, pre-emptively. – The boards have ethicists on it. – That’s right. There’s different research committees, they have oversight, sort of IRBs, they are Institutional Research Boards. – [Neil] Institutional Research Boards. – Yeah and then they have
ethicists on those boards to look at research, look over the experiments, to make sure that they’re ethical. The problem is that, with
these IRBs it’s sort of– – Wait. There’s something like that,
we’re not allowed to, we, the scientific community, have
rules about what animals you can do laboratory tests on. – Really? – Right, like chimpanzees, there’s certain things you
can’t do or that you can, and depending on someone’s judgment, some panels judgment, as to
the value of that animal, to ecosphere, or to whatever,
and other than PETA, if you’re doing it to a rat,
I don’t think anyone cares. – [Paul] I was gonna say, the rat, like, that poor thing gets
slammed every time. – So do you think that it can
be effective going forward? It’s only effective if the
researchers are responsible. – That’s right. – Okay. – Yeah and also the value of the research has to justify whatever
research that you’re doing. So you can’t just torture
these rats for fun. – You can’t? – You cannot. – Oh, Jesus. – Right, that’s very unethical. And so– – Wish you had told me
that a couple weeks ago. – You know so, in order to
do research even on animals, even on rats and mice, you
have to be able to justify it to an Institutional Research Board. You have to say why is this necessary? And there’s no other way. You have to show that
there’s no other way, that this is the less
harmful way of doing it. The least harmful way of doing it. – And it’s not the rat’s fault it doesn’t have hair on its
tail, though a squirrel does. (group laughing) – You look adorable. It’s not its fault it’s
got that really pointy nose. – It’s not its fault. – It’s just hanging out. It’s a rat. – It’s not its fault
it eats your garbage. – Well now it’s my fault. I’m sorry I put my
garbage out on the street. – The squirrel eats nuts
and the rat eats your garbage and you don’t like it. – Right, pigeon? Rats with wings? Do we do experiments on those? We should. Look into that. (group laughing) All right we have another one. – Yeah, I guess, keep going. – Scoshashofrandon Instagram, “If in the future our
noble intentions lead to “the practice of
genetically editing fetuses “for preventing birth
defects and future diseases, “how do we avoid the pitfall
of creating designer babies, “and the possible repercussions,
genetic inequality, “caste systems, etc., “and would it even be a pitfall at all?” – Yes, so that’s right. Would it even be a pitfall at all? Maybe this is something we
should think about doing, maybe there are good reasons to do it, for example– – To do what? – To genetically have designer babies, to engage in genetic editing, so this is where we were
talking about earlier that people, when they
think about new technologies they get very scared,
but maybe there are good uses of these technologies. So just for example, if we wanna go to the moon, or go to space, we wanna
make sure that we’re more radiation resistant and so
maybe there’s some sort of genetic basis where we can
be more radiation resistant, and so that’s something
that we should look into, if we want to sort of– – So that means you’d breed
people for certain jobs? – Yeah. – But this idea of
creating the perfect human, I mean, I don’t even know
if anybody wants that, I mean everybody hates Tom Brady. (group laughing) And that’s about as perfect
as you’re gonna get. And I’m a Patriots fan
saying that by the way. – [Neil] Here’s where I would take that. I would say, isn’t so much of what we are, what we’ve been through to
overcome what we’re not, so that if you come out perfect, then where does your
character get developed? Where is your sense of– – Because you’re interacting
in an imperfect world, right? So your perfection is always challenged. – Well I’m just saying, who you are, is almost always what you
have overcome in life. If you’re perfect, there’s
nothing for you to overcome, what do you got to show for anything? – So you’re saying it’s
not achievable to create a perfect person. – No you can create a perfect person, but they will achieve nothing. (group laughing) That’s what I’m saying. The real achievers stuff happened to them. – Hey, doc. I was supposed to be perfect, but I’m not. What’s going on here? – Look at the real achievers in life. They’ve overcome
something, a broken family, there’s a thing, they have
a lisp, they’ve got a this, they have a limp. – A therapist gave me a list of people who were rejected like Edison, Bell, no one’s gonna wanna
talk to each other far apart through a box and they rejected him and
rejected him, and overcame. – [Neil] Right, that’s what I’m saying. So, if you’re perfect, you
might be of no use to anyone. – Right, yeah. So I think there are
two things to say there. So one is that human goals
will change the better you get so kids, when they’re five years old they like to play Go Fish,
but now they’re ten years old, they don’t play Go Fish
anymore, it’s too boring, because you’ve outgrown that, right? And so you can imagine
that when we get smarter there are other things,
there are other challenges. – [Neil] That we don’t even know about. – That we don’t even know of right now. And then the flip side of that is, if you really think that
there’s really value to being imperfect,
there’s an app for that. (group laughing) So make it more challenging. – Okay, take it back. – All right, should we
do another question? – [Neil] Real quick. Another question, go. – Okay, here we go. Patrick Lin, Facebook,
“Are there any red lines “that we should not cross,
or maybe never cross “in science and in ethics?” And a related question, “Are
there any ethical red lines “today, that you think
should be rolled back?” – Ooh, good one. And we don’t have time to answer that. We have to take a break. When we come back, the red line. Should you cross it or not? On StarTalk. – We’re back on StarTalk, bioethics, is the subject of this
edition of Cosmic Queries. Matthew Liao, you’re our ethicist. You’re head of a whole
center for bioethics. So everybody comes to
you with their problems. Is that how that works? And always good to have you Paul. When we left off, there
was a question about crossing red lines. Yeah, this is Patrick Lin,
Facebook, “Are there any “red lines that we should not cross.” And a related question is,
“Have there been any red lines “that you feel we’ve crossed
that should be rolled back?” – Yeah, well I think
there’s many red lines that we shouldn’t cross, so creating humans, that will
be slave humans for example, that’s an obvious one. – Doesn’t that happen anyway if you create humans who are perfect? Then the humans who
are not created perfect are left as slaves to the perfect ones. – Boy you really hate perfection. – No! Then you’re making a slave class without purposely making a slave class. – Yeah, so there’s this view that, even in our society now people
have differential abilities, but we think that everybody, they all have the same moral
status, and we could still– – [Neil] Equal under the eyes of the law. – That’s right. And so, we could still have that, even if you have some people who are perfect and other people not as perfect. – Who would be enslaved by them (laughs). And how about red lines
that we have crossed that you would roll back today? I got one. I’m old enough, I’m
older than all of y’all, to remember the announcement of the first test tube
baby, that was born. That was banner headlines. Test tube baby. And today, that’s not even an
interesting point to raise, on a first date. Whether you are in vitro
or in utero conceived. – What were you like dating? Is that your opening line? – No but, there was a day
that might have been a thing, Yeah, I’m a test tube baby. That was like, wow, tell me about it. – [Paul] It’s a really good point. – Right, and back then
people said, are we playing God by fertilizing eggs in a test tube? And now it’s like of
course you’re doing it. This is the fertility aid
that goes on every day, for so many couples. So, I bet that that would be
a line that existed back then, that we crossed, and now
you would not roll it back, because we’re all just accustomed to it. Would you agree? – Actually we just ran a
conference on the ethics of donor conception two weeks ago at NYU and there were all these
donor conceived individuals and they were saying that
they shouldn’t have been born. – Should not have been born? – Should not have been born. – Why? – Yeah, because they feel
that they don’t know who their genetic parents are,
they feel very isolated. There’s just a lot of psychological– – Well this idea of God, I
mean if you’re an atheist, I was curious about this, where does religion creep into this? So people start to go well– – Because ethics panels
typically have a pastor, or somebody, that brings
a religious philosophy to the argument. – Religion is not a part
of my life on any level, why am I leaving it to some ephemeral being? – That explains everything about you. (group laughing) – I’m soulless everybody. That’s my tour, the soulless
stand up comedy tour. – [Neil] You’re going to hell. – I am. Pretty good religion– – So how does religion fold into this? Religious ethics I guess. – [Matthew] So some
people look at ethics from a religious standpoint, so
there’s divine command theory, what would God do? Or what would God command
in certain situations? So they would look at these
issues from that angle. – [Neil] Speaking for God,
on the assumption that they understand the mind of God, for having read books, that
they presume God wrote. (group laughing) – Just wanna clarify. – Well there’s a view that
there’s the natural law of you that, what God would want
is what our best reasoning, whatever we come up
with our best reasoning. – [Neil] At the time. – Yeah, at the time. So that’s sort of a natural law type view. – And you alluded to this
bringing a perfect person into the world, just like
the bioethics and whatever, and but then you look
at the world we live in, okay we’re gonna make
genetically enhanced corn so we have better nutrition, so that we’re in better
shape to kill each other. I just feel like it’s– – Here’s what we need to do. We need a gene for
rational thought (laughs). Let’s work on that one okay, get your people to– – And this trademark, yeah. – Well there are a lot
of people talking about moral enhancement. Can we enhance ourselves
morally so that we’re, less aggressive, and more sympathetic, and empathetic to the
plights of others, etc. – Eh I say, screw other people. Should we do another one? – [Neil] Next question. Yeah, go for it. – We are gonna go to
Dixon Clinton, Instagram, “Combining CRISPR and ever advancing AI, “will be the downfall of humankind right? “How many years do I have
before I’m being murdered by “cyborg overlords?” Wow, you gotta stop going to the movies. – Yeah, so when do we all die? – Well we’re all gonna die. So some people like Ray
Kurzweil thinks that by 2050 we’ll have superintelligence, other AI scientists– – Ray Kurzweil, we have
interviewed him on StarTalk, in a live taping. Go on. – Yeah, and so other people
say, they’re less optimistic but they think that
maybe by 2100 we’ll have superintelligence. And so there’s a real life issue, what happens when you have
these really smart AI’s that are smarter than us? – We become their slaves. – We’d become their slaves
if we’re lucky, then maybe– – Well maybe we’ll just become their pets. – Or maybe we’ll grow out of existence. – I can see me sniffing your butt. (group laughing) Maybe I went too far there, I’m sorry. All right this is Chris Cherry, Instagram. Hi Chris, from the
sunshine coast, Australia, not Austria, Australia. “Should we fear DNA
sample’s being required by “health insurance companies and employers? “Potentially you could
be discriminated against “because of something you
have no control over.” Yeah, Chris, it’s called
race and ethnicity. It’s happening every day. You alluded to this about
the insurance companies. – Yeah absolutely. I think that’s a real worry
that, as more and more of our information is available through genetic testing, etc., companies might use that
in inappropriate ways or unethical ways. – So an ethics board would say, “No, insurance companies
will not have access “to your DNA.” – [Matthew] That’s right. Or maybe a society, maybe that’s something that’s beyond a ethics board. – So don’t leave a coffee
cup that you sipped from in the insurance office. Cause maybe then– – They might take a swab. – Swabs. – Swab it and send it to– – Just show up completely,
hazmat suit and gloves. – All right, we gotta
go to lightning round. Okay, ready? So you’re gonna ask me a question and Matthew you have to
answer it in a sound bite. Pretend you’re on the evening news and they’re only gonna sound bite you. – Okay. (ding) – Okay, this is Justin
Willden from Instagram, “What’s your opinion on ethics of “manipulation/creation of AI in general? “Could we manipulate with it so far “to come close to something resembling “our own consciousness?” – Not yet. – When? – It’s hard to say. I don’t think we’ve figured
out what consciousness is or the biological substrates
of consciousness, to be able to do that yet. None of the machine learning
technologies right now can do that. – The day we understand consciousness, how soon after that do we
program that into computers? The next day. Okay, next question. – This is dajanero, Instagram,
“Do you think AI in humans “will be integrated or
DNA editing can be used “to create superhumans
like we see in X-Men?” – I’d like that because,
if you can edit the DNA, what do you need the computer for? That’s the question. – So the computers might be faster, so they have more bandwidth
whereas the brain is very slow, it thinks very slowly,
so you can imagine that, once you can augment through some sort of brain-computer interface,
it gives you vast amounts of storage, space, capacity, upgrade– – And perfect memory. It’s none of this arguing
about what happened. – Exactly, I said this. No I didn’t forget to buy the milk. – Let’s go to the videotape. This is a theme on many
episodes of “Black Mirror” by the way. You should check it out on Netflix. (ding) Okay, next. – Galaxystargirlxbox,
Instagram, “Do you think that– – Whoa! Excellent, I love it. – And there’s an underscore,
but I left that out, “Do you think the future of
AI in society will bring about “the less need for doctors? “I believe doctors will still be needed, “just in fewer numbers.” – Yeah, I’m not sure about the numbers but we’re gonna have wearable stuff, that are gonna be able
to track our heartbeats, our toilets are gonna be
able to analyze our stool, and tell us whether we’re healthy or not, and then that’s gonna be sent to doctors. – Do you want your toilet
talking about your poop? (group laughing) That’s what he just said. – If my toilet could, it would throw up. – But I think it’s coming. Smart toilets are coming. So that’s the next business. (ding) – Next, okay.
Versailles, Instagram, “I would like to know,
what are the considerations “to judge something as
“good” or “bad” in the aspect “of modifying an organism
genetically, humans for instance?” – So I have this view
that humans need some basic capacities, things like the ability to think, to have deep personal
relationships, and things like that, and so I think that whatever we do with genetic modification, we
shouldn’t interfere with those core human capacities, and the flip side of that is if an embryo, like an
offspring doesn’t have those capacities, then
we should try to make sure that they have those– – [Neil] In whatever
genetic way, if possible. And beyond that it’s just luxury items off of a shopping list (laughing). (ding) All right, next. We’ve got time for one more. – Here we go. – Better be a good one, dude. – Wow, there’s a lot of pressure here. Okay, this is Dagan Pleak, Instagram, “Will we attempt to splice human DNA with “other animal DNA to
make mutants of a sort? “Would this conflict with our ethics “and what are your thoughts on creating “new humanoid species?” It’s called the centaur, isn’t it? (group laughing) – Or a minotaur. – Yeah, that’s a great question. So it relates to what I just said earlier. I think as long as we don’t affect those core fundamental capacities,
sometimes we might look into these types of augments, combining different genes. – What animal would you
want to splice with a human? – I can tell you. – Would you wanna be? Let me guess. A dog, so you could sniff. – In my concluding
remarks, I will tell you. Yes. So we got time for just
some reflective thoughts. So Paul why don’t you go first. – I just think that all of these questions that you deal with, it’s
endlessly fascinating and on some level open-ended. You seem to have the most
subjective job in a way. – Plus you’re like the
calmest person I’ve ever met. – Which means you’re up to no good. – He’s hiding something. – Because you know
something that we don’t. And, we didn’t get much into this, but I know you’ve done a lot of work with manipulation of memory for
PTSD, rape victims, etc., and erasing thought. Is that making advancements– – [Neil] Was that part of your TED Talk? – Yes. – And can I have it in September because I’m going to a reunion in high school and I wanna wipe out the memory of asking Renee Sherlock to the prom and getting turned down twice. I wanna wipe out her memory and mine. – And both memories. Oh yeah. – Take some propranolol with you. – I knew you were a drug dealer. – He’s got it. He’s got the drugs. – Is that fairly far along? – That’s pretty far along
but unfortunately you gotta take it within 12 hours of
asking someone to a prom so. (group laughing) – Oh, so it erases your short-term memory. – [Matthew] That’s right. It stops it from consolidating
into the long-term memory. There’s another thing,
something called Zip. So there’s this idea that– – I’m not consuming anything called Zip. (group laughing) – Zip erases everything. – Really? – Yeah. – Wow, I’ll see you after the show. – So, Matthew, give us some reflective concluding remarks here. – So, I think there are a
lot of these new technologies that are on the horizon. I think they have a lot of promise, but we should also be mindful of their ethical implications, and I think they can– – Further keeping you employed. – That’s right. So that it keeps me employed. – That’s hilarious. He keeps raising issues
that aren’t issues. No that’s an issue, it is– – It’s not an issue yet, but it will be. – It’s an issue. My kids going to college
next year, it’s an issue. – Yeah and I think
ultimately our aim is to create human well-being,
human flourishing, and so we wanna make sure that
these technologies do that. – So here’s what I think,
not that anyone asked, – Wait, Neil what do you think? – [Neil] Thank you Paul! The fact that you can
cross-breed the genetics of different species at
all, we do this often in the food chain, is a
reminder that all life has some common DNA. So we should not be surprised that you can take fish DNA and put it in a tomato. Just a reminder that we’re
all related, genetically. So what I think to myself is, the human form is not some
perfect example of life, I like the fact that newts
can regenerate their limbs. Where is the gene sequence for that? Let’s give that to humans
and give it first to veterans who have lost their legs or arms, so they can regrow their limbs. If a newt can do it and
we have genetic editing, why can’t we do it? – And why haven’t we? – [Neil] Well, maybe that’s
to come, but look at what is possible in the experimentation
of the biodiversity that is life on earth and say, why can’t we have some of that? And that is a thought from
the cosmic perspective. I wanna thank Matthew Liao. Second time on StarTalk, we
will bring you back for sure. – Thank you. – [Neil] All right and work hard, make a better world for us, or help us make a better
world for ourselves. – Keep creating issues
that aren’t really issues so you have a job. By the way, best mind eraser, vodka. It’s already been invented. – [Neil] Takes out
those cells right there. Paul, always good to have you. – Thank you, it’s a lot of fun. – [Neil] All right, I’ve
been and will continue to be Neil deGrasse Tyson, your
personal astrophysicist, coming to you from my office, at the Hayden Planetarium at the American Museum of Natural History, and as always I bid
you to keep looking up. (upbeat music)