Saturday, February 14, 2015

What do you think about machines that think?




http://edge.org/annual-question/what-do-you-think-about-machines-that-think
 What Do You Think About Machines That Think?



The challenge of thinking machines | Science | EL MUNDO

http://www.elmundo.es/ciencia/


HAL's camera eye
The computer HAL in Kubrick's '2001: A Space Odyssey', an icon of artificial intelligence.

"¿Qué piensa usted sobre las máquinas que piensan?" Ésta es la pregunta que la revista digital Edge ha lanzado, como todos los años por estas fechas, a algunas de las mentes más brillantes del planeta. Hace poco más de un mes, a principios de diciembre, Stephen Hawking alertó sobre las consecuencias potencialmente apocalípticas de la inteligencia artificial, que en su opinión podría llegar a provocar "el fin de la especie humana". Pero, ¿realmente debemos temer el peligro de un futuro ejército de humanoides fuera de control? ¿O más bien deberíamos celebrar las extraordinarias oportunidades que podría brindarnos el desarrollo de máquinas pensantes, e incluso sintientes? Semejantes seres, además, nos plantearían nuevos dilemas éticos. ¿Formarían parte de nuestra "sociedad"? ¿Deberíamos concederles derechos civiles? ¿Sentiríamos empatía por ellos? Un año más, algunos de los pensadores y científicos más relevantes del mundo han aceptado el reto intelectual planteado por el editor de Edge, John Brockman. Ésta es tan sólo una selección de algunas de las respuestas más interesantes.

Nick Bostrom. Director of the Future of Humanity Institute, Oxford:

I think that, in general, people are too quick to form an opinion on this extremely complicated topic. There is a tendency to assimilate any complex new idea to a familiar cliché. And for some bizarre reason, many people feel it is important to refer to what happens in various science fiction films and novels when the conversation turns to the future of artificial intelligence. My view is that, right now, machines are very bad at thinking (except in a few narrow domains). One day, however, they will probably do it better than we do (just as machines are already far stronger and faster than any biological creature). But at present there is little information on how long it will take for this artificial superintelligence to emerge. The best thing we can do now, in my opinion, is to boost and fund the small but burgeoning field of research that analyzes the problem of controlling the future risks of superintelligence. It will be very important to have the most brilliant minds on board, so that we are prepared to meet this challenge in time.

Daniel C. Dennett. Philosopher at the Center for Cognitive Studies, Tufts University

The Singularity, the dreaded moment when Artificial Intelligence (AI) surpasses the intelligence of its creators, has all the classic hallmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") combined with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). After decades of alarmism about the risks of AI, you might think the Singularity would by now be regarded as a joke or a parody, but it has proven to be an extremely persuasive idea. Add a few illustrious converts (Elon Musk, Stephen Hawking...) and how can we not take it seriously? I believe, on the contrary, that these alarm calls distract us from a far more pressing problem. Having acquired, after centuries of hard work, an understanding of nature that allows us, for the first time in history, to control many aspects of our destiny, we are on the verge of abdicating this control and handing it over to artificial agents that cannot think, prematurely putting our civilization on autopilot. The Internet is not an intelligent being (except in certain respects), but we have become so dependent on the Net that if it were ever to collapse, panic would break out and we could destroy our society within days. The real danger, therefore, is not machines more intelligent than we are usurping our role as captains of our destiny. The real danger is that we cede our authority to stupid machines, giving them responsibility that exceeds their competence.

Frank Wilczek. Physicist at the Massachusetts Institute of Technology (MIT) and Nobel laureate

Francis Crick called it the "Astonishing Hypothesis": consciousness, also known as mind, is an emergent property of matter. As molecular neuroscience advances, and as computers reproduce more and more of the behaviors we call intelligent in humans, that hypothesis looks ever more plausible. If it is true, then all intelligence is machine-made intelligence [whether the machine is a brain or an operating system]. What distinguishes natural from artificial intelligence is not what it is, but only how it is made. David Hume proclaimed that "reason is, and ought only to be, the slave of the passions" in 1738, long before anything remotely like modern artificial intelligence existed. That striking phrase was meant, of course, to apply to human reason and human passions. But it also holds for artificial intelligence: behavior is driven by incentives, not by abstract logic. That is why the application of artificial intelligence I find most alarming is the military one: robot soldiers, drones of every kind and "systems." The values we would want to install in such entities would center on the capacity to detect and fight threats. But it would take only a slight anomaly for those positive values to unleash paranoid and aggressive behavior. Without proper control, this could end in the creation of an army of powerful, clever and perverse paranoids.

John C. Mather. Astrophysicist at NASA's Goddard Space Flight Center and Nobel laureate

Thinking machines are evolving just as Darwin explained biological species do: through competition, combat, cooperation, survival and reproduction. So far we have found no natural law that would prevent the development of artificial intelligence, so I believe it will become a reality, and fairly soon, given the trillions of dollars being invested in this field around the world, and the trillions of dollars in potential profits awaiting the winners of this race. Experts say we do not know enough about intelligence to build it, and I agree; but a set of 46 chromosomes doesn't understand it either, and yet it manages to direct its own creation inside our bodies. My conclusion, then, is that we are already driving the evolution of a powerful artificial intelligence, one that will serve the usual forces: business, entertainment, medicine, international security, war, and the pursuit of power at every level: crime, transport, mining, industry, commerce, sex, and so on. I don't think we will all like the results. I don't know whether we will have the intelligence and imagination needed to keep the genie in check once it is out of the lamp, because we will have to control not only the machines but also the humans who may put them to perverse use. But as a scientist, I am deeply interested in the potential applications of artificial intelligence to research. Its advantages for space exploration are obvious: it would be far easier for these thinking machines to colonize Mars, and even to establish a civilization on a galactic scale. But perhaps we will not survive the encounter with these alien intelligences of our own making.

Steven Pinker. Professor of Psychology, Harvard University

A human-made information processor could, in principle, match or exceed the capabilities of our own brains. I doubt, however, that this will happen in practice, since the economic and technological motivation needed to achieve it will probably never exist. Still, some modest steps toward smarter machines have revived the recurring anxiety that our knowledge will lead us to apocalypse. My view is that today's fear of tyrannical, runaway computers is a waste of emotional energy; the scenario looks more like the Y2K bug than the Manhattan Project. To begin with, we have plenty of time to plan for all this. Artificial intelligence is still 15 to 25 years away from reaching the level of the human brain. It is true that in the past, "experts" dismissed the possibility of technological advances that then emerged quickly. But the reverse is also true: "experts" have also announced (sometimes in great panic) the imminent arrival of advances that were never seen, such as nuclear-powered cars, underwater cities, colonies on Mars, designer babies, and warehouses of zombies kept alive to supply spare organs to the sick. It seems very strange to me to suppose that robot developers would not build in safeguards against possible risks. And it is not plausible that artificial intelligence will descend upon us before we can install precautionary mechanisms. The reality is that progress in artificial intelligence is far slower than the doomsayers and alarmists would have us believe. We will have more than enough time to adopt safety measures as gradual advances are achieved, and humans will always keep control of the screwdriver. Once we set aside science fiction fantasies, the advantages of advanced artificial intelligence are genuinely exciting, both for their practical benefits and for their philosophical and scientific possibilities.
_________________________________________________________________
 http://upload.wikimedia.org/wikipedia/en/d/d5/I_robot.jpg
 BLOG Mono Pensante

I, Thinking Machine

Once again this year, the online magazine Edge has sparked a fascinating, high-level intellectual debate with the annual question it puts, around this time of year, to some of the most brilliant minds of our time. On this occasion, its brilliant editor John Brockman posed the challenge of dissecting the lights and shadows of artificial intelligence (AI): "What do you think about machines that think?" The answers reflect an enormously wide range of opinions among some of today's greatest scientists and thinkers, showing that there is no clear consensus on how far we should celebrate or fear the rise of thinking machines.
At one extreme we find the great American philosopher Daniel C. Dennett, who pokes wry fun at the "urban legend" according to which "robots will rule us" in the near future. At the other we find scientists of the stature of NASA astrophysicist and Nobel laureate John C. Mather, who is convinced that artificial intelligence "will become a reality, and fairly soon," given the staggering amount of money already being invested in this field, and the enormous potential profits awaiting the entrepreneurs who build the first computers with human (or superhuman) intelligence.
Yet although the experts disagree on whether the age of AI is near or far, there is very broad consensus that this revolution will, sooner or later, inevitably arrive. The reason is well explained by the physicist and fellow Nobel laureate Frank Wilczek, citing the famous "astonishing hypothesis" of Francis Crick, co-discoverer of the structure of DNA: the human mind is nothing more than "an emergent property of matter," and therefore "all intelligence is machine-made intelligence" (whether the machine is a brain made of neurons or a robot built from silicon chips).
As the great Spanish neuroscientist Rafael Yuste told me in an unforgettable interview: "There is no magic inside the skull; the human mind and all our thoughts, our memories and our personality are based on the firing of groups of neurons. There is nothing more, no spirit in the ether... What there is, is a vast ignorance about how this machine works. But I am sure that consciousness arises from the physical substrate we have in the brain."
And that is why, as the biologist George Church says in his own answer to the Edge question, "I am a machine that thinks, made of atoms." If this is true, the appearance of other kinds of machines that can also think is only a matter of time.

_________________________________________________________________

What do you think about machines that think? - La Nación

 
Debates
More than 180 scientists, philosophers, writers and technologists responded to the annual call from the website Edge.org with original reflections on the scope, risks and possibilities of artificial intelligence, a cutting-edge field of science that is already bringing the future into the present

Is artificial intelligence one of the most promising advances of contemporary science, or a risk to humanity? Between those two poles, with irony, optimism or caution, ranged the 186 scientists, writers and thinkers convened this year by Edge.org (a website, associated with a publishing house, that promotes cutting-edge thinking and discussion in science, the arts and literature) to answer its annual question. The contributors wrote short essays, available online (www.edge.org), which, as every year, will shortly be published in book form. Here, a selection of their answers.

Minds that exist alongside our own

Pamela McCorduck (writer, author of Machines Who Think)
For more than fifty years I have watched the ebb and flow of public opinion about artificial intelligence (AI): it's impossible and can't be done; it's horrendous and will destroy the human race; it's significant; it's negligible; it's a joke; it will never be strongly intelligent, only weakly so; it will bring on another Holocaust. These extremes have lately given way to an acknowledgment that AI is an epochal scientific, technological and social (that is, human) event. We have developed a new mind to exist alongside our own. If we handle it wisely, it can bring immense benefits, from the planetary to the personal. (...)
No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure and respond to them is a task of grand proportions. Contrary to the headlines, that task has already been formally taken up by the experts in this field, those who best understand AI's potential and limits. In a project called AI100, based at Stanford, scientific experts, together with philosophers, ethicists, legal scholars and others trained to explore values beyond visceral reactions, will undertake this effort (...)
This is what I believe: we long to save and preserve ourselves as a species. Instead of the imaginary deities we have petitioned throughout history, which have neither saved nor protected us (from nature, from one another, from ourselves), we are finally ready to call on our own enhanced, augmented minds. It is a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.

Letting them think for us is Paradise

Virginia Heffernan (columnist for The New York Times Magazine)
Outsourcing to machines the many idiosyncrasies of mortals (making interesting mistakes, ruminating on truths, appeasing the gods by cutting and arranging flowers) tilts toward the tragic. But letting machines think for us? That sounds like paradise. Thinking is optional. Thinking is suffering. It is almost always a way of being careful, of paying hypervigilant attention, of resenting the past and fearing the future in the form of a maddeningly redundant inner monologue. If machines can relieve us of this onerous non-responsibility, which in too many of us runs in pointless overdrive, I am all for it.
Let the machines persevere in answering the tedious, value-laden questions about whether private or public school is right for my children; whether intervention in Syria is "appropriate"; whether germs or loneliness are "worse" for an organism. That will free us humans to devote ourselves, carefree, to play, rest, writing and cutting flowers, the enriching flow states that produce actions that truly enrich, enliven and heal the world.

In practice and in philosophy, they are positive

Steven Pinker (experimental and cognitive psychologist, linguist, professor at Harvard)
(...) Some recent small steps toward smarter machines have revived a recurring anxiety that our knowledge will doom us. My view is that today's fears of computers running amok are a waste of emotional energy, and that the scenario is closer to the Y2K bug than to the Manhattan Project.
For one thing, we have plenty of time to plan for this. Human-level AI is still 15 to 25 years away, just as it always has been, and many of its recently touted advances have shallow roots. It is true that in the past various "experts" have comically dismissed the possibility of technological advances that then arrived rapidly. But this cuts both ways: there have also been "experts" who announced (or panicked over) imminent advances that never came, such as nuclear-powered cars, underwater cities, colonies on Mars, designer babies and warehouses of zombies kept alive to supply people with spare organs. (...)
Once we set aside the disaster plots of science fiction, the possibility of advanced artificial intelligence is something to be excited about, and not only for its practical benefits, such as the fantastic gains in safety, leisure and environmental protection built into driverless cars, but also for its philosophical possibilities. The computational theory of mind has never explained the existence of consciousness in the sense of first-person subjectivity (though it is perfectly capable of explaining the existence of consciousness in the sense of accessible and reportable information). One suggestion is that subjectivity is inherent to any sufficiently complicated cybernetic system. I used to believe that this hypothesis (and its alternatives) was permanently untestable. But imagine an intelligent robot programmed to monitor its own systems and to pose scientific questions. If, unprompted, it asked why it has subjective experiences, I would take the idea seriously.

They free us to be more human

Irene Pepperberg (psychologist and ethologist, professor at Harvard)
Machines are wonderful at computing, but they are not very good at thinking.
Machines have an inexhaustible supply of stamina and perseverance and, as others have noted, they effortlessly produce the answer to a complicated mathematical problem or guide us through traffic in an unfamiliar city, all on the basis of algorithms and programs installed by humans. But what do machines lack?
Machines (at least so far, and I don't see this changing soon) have no vision. And I don't mean sight. Machines don't invent the next killer app on their own. Machines don't decide to explore distant galaxies; they do a great job when we send them, but that is a different story. Machines are certainly better than the average person at solving problems in calculus or quantum mechanics, but they lack the vision to grasp why doing so matters. Machines can beat humans at chess, but they have never designed the kind of mental game that would hold human attention for centuries. Machines can see statistical regularities that my feeble brain cannot perceive, but they cannot make the visionary leap that connects disparate data sets to define a new field. (...)
My worry, therefore, is not machines that think, but rather a complacent society that might give up its visionaries merely in exchange for eliminating drudgery. Humans must harness all the cognitive capacity freed up when machines take over the boring work, be grateful for that liberation and use it, channeling that capacity into the hard work of solving the urgent problems that demand insightful, visionary leaps.

It's time for them to grow up

Thomas A. Bass (writer, professor of literature and history)
Thinking is good. Understanding is better. Creating is best. We are surrounded by increasingly thoughtful machines. The problem is their mundane character. They think about landing planes and selling things. They think about surveillance and censorship. Their thinking is simple-minded, if not nefarious. Last year a computer was said to have passed the Turing Test. But it passed it as a thirteen-year-old boy, which is about right, given the preoccupations of our immature machines. I can't wait for our machines to grow up, to acquire more poetry and humor. This should be the artistic project of the century, funded by governments, foundations, universities and corporations. Everyone has a stake in making our thinking more thoughtful, in deepening our understanding and generating new ideas. We have lately made a lot of dumb decisions, based on bad information, or too much information, or an inability to grasp what the information means.
We have numerous problems to confront and solutions to find. Let's start thinking. Let's start creating. Let's ask for more funk, more soul, more poetry and art. Let's cut back on surveillance and selling. We need more artist-programmers and more artistic programming. It is time for our thinking machines to outgrow an adolescence that has lasted more than sixty years.

They are not artificial, they are designed

Paul Davies (theoretical physicist, cosmologist, researcher)
Debates about AI go back to the 1950s, and it is time to stop using the term "artificial" in connection with AI altogether. What we really mean is "Designed Intelligence" (DI). In popular speech, words like "artificial" and "machine" are used in opposition to "natural" and evoke metal robots, electronic circuits and digital computers, as opposed to living, pulsing, thinking biological organisms. The idea of a wire-gutted metal contraption holding rights or defying human laws is not just chilling, it is absurd. But this is decidedly not where DI is headed.
Very soon the distinction between artificial and natural will fade away. Designed Intelligence will rest increasingly on synthetic biology and organic fabrication, in which neural circuits will be grown from genetically modified cells and will self-organize spontaneously into networks of functional modules. At first the designers will be human, but they will soon be replaced by smarter DI systems, setting off a runaway process of complexification.
Unlike the human brain, loosely linked via communication channels, DI systems will be directly and comprehensively connected, abolishing any notion of individual "selves" and raising the level of cognitive activity ("thinking") to unprecedented heights (...).
If we are not alone in the universe, we should expect to communicate not with intelligent beings of the traditional flesh-and-blood embodiment of science fiction, but with a DI many millions of years old, of unimaginable intellectual power and incomprehensible goals.

Limited power, great risk

Nicholas G. Carr (writer on technology, culture and business)
Machines that think think like machines. That fact may disappoint those who anticipate, with fear or enthusiasm, a robot uprising. For most of us it is reassuring. Our thinking machines are not about to leap past us intellectually, much less turn us into their servants or pets. They will keep doing what their human programmers ask of them.
Much of the power of artificial intelligence derives from its mindlessness. Immune to the vagaries and prejudices that are part of conscious thought, computers can perform lightning-fast calculations without distraction, fatigue, doubt or emotion. (...)
Things get complicated when we want computers to act not as our helpers but as our replacements. That is what is happening now, and quickly.
Today's thinking machines can perceive their surroundings, learn from experience and make decisions autonomously, often with a speed and precision beyond our capacity to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied in robots or simply issuing judgments derived from algorithms, mindless machines carry enormous risks along with their enormous power. (...)
What we are now struggling to bring under control is precisely what helped us take control at the beginning of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in ways that serve our social and personal interests. The first step in meeting this challenge is to recognize that the risks of artificial intelligence do not belong to some dystopian future. They are here now.
Translation: Gabriel Zadunaisky.
_________________________________________________________________
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?


  "Dahlia" by Katinka Matson www.katinkamatson.com

 "Un año más, y algunos de los pensadores y científicos del mundo más importantes han aceptado el desafío intelectual." -El Mundo, 2015
 "Deliciosamente creativo, la variedad asombra. Cohetes estelares Intelectuales de impresionante brillantez. Nadie en el mundo está haciendo lo que Edge (Al Borde) está haciendo ... la mayor universidad virtual de investigación en el mundo. —Denis Dutton, Founding Editor, Arts & Letters Daily


_________________________________________________________________
Dedicated to the memory of Frank Schirrmacher (1959-2014).
In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."  
_________________________________________________________________



THE EDGE QUESTION—2015 
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?
Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.
Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg".  Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015:
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
John Brockman
 Publisher & Editor, Edge
_________________________________________________________________



 
Pamela McCorduck
Author, Machines Who Think, The Universal Machine, Bounded Rationality, This Could Be Important; Coauthor (with Edward Feigenbaum), The Fifth Generation
For more than fifty years, I've watched the ebb and flow of public opinion about artificial intelligence: it's impossible and can't be done; it's horrendous, and will destroy the human race; it's significant; it's negligible; it's a joke; it will never be strongly intelligent, only weakly so; it will bring on another Holocaust. These extremes have lately given way to an acknowledgment that AI is an epochal scientific, technological, and social—human—event. We've developed a new mind, to live side by side with ours. If we handle it wisely, it can bring immense benefits, from the planetary to the personal.
One of AI's futures is imagined as a wise and patient Jeeves to our mentally negligible Bertie Wooster selves: "Jeeves, you're a wonder." "Thank you sir, we do our best." This is possible, certainly desirable. We can use the help. Chess offers a model: Grandmasters Garry Kasparov and Hans Berliner have both declared publicly that chess programs find moves that humans wouldn't, and are teaching human players new tricks. Deep Blue beat Kasparov when he was one of the strongest world champion chess players ever, yet he and most observers believe that even better chess is played by teams of humans and machines combined. Is this a model of our future relationship with smart machines? Or is it only temporary, while the machines push closer to a blend of our kind of smarts plus theirs? We don't know. In speed, breadth, and depth, the newcomer is likely to exceed human intelligence. It already has in many ways.
No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure, and meet them is a task of grand proportions. Contrary to the headlines, that task has already been taken up formally by experts in the field, those who best understand AI's potential and limits. In a project called AI100, based at Stanford, scientific experts, teamed with philosophers, ethicists, legal scholars and others trained to explore values beyond simple visceral reactions, will undertake this. No one expects easy or final answers, so the task will be long and continuous, funded for a century by one of AI's leading scientists, Eric Horvitz, who, with his wife Mary, conceived this unprecedented study.
Since we can't seem to stop, since our literature tells us we've imagined, yearned for, an extra-human intelligence for as long as we have records, the enterprise must be impelled by the deepest, most persistent of human drives. These beg for explanation. After all, this isn't exactly the joy of sex.
Any scientist will say it's the search to know. "It's foundational," an AI researcher told me recently. "It's us looking out at the world, and how we do it." He's right. But there's more.
Some say we do it because it's there, an Everest of the mind. Others, more mystical, say we're propelled by teleology: we're a mere step in the evolution of intelligence in the universe, attractive even in our imperfections, but hardly the last word.
Entrepreneurs will say that this is the future of making things—the dark factory, with unflagging, unsalaried, uncomplaining robot workers—though what currency post-employed humans will use to acquire those robot products, no matter how cheap, is a puzzle to be solved.
Here's my belief:  We long to save and preserve ourselves as a species. For all the imaginary deities throughout history we've petitioned, which failed to save and protect us—from nature, from each other, from ourselves—we're finally ready to call on our own enhanced, augmented minds instead. It's a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.
We're trying. We could fail.



George Church
Author, Regenesis; Professor, Harvard University; Director, Personal Genome Project

I am a machine that thinks, made of atoms—a perfect quantum simulation of a many-body problem—a 10^29-body problem. I, robot, am dangerously capable of self-reprogramming and preventing others from cutting off my power supply. We human machines extend our abilities via symbiosis with other machines—expanding our vision to span wavelengths beyond the mere few nanometers visible to our ancestors, out to the full electromagnetic range from picometer to megameter. We hurl 370 kg hunks of our hive past the sun at 252,792 km/hr. We extend our memory and math by a billion-fold with our silicon prostheses. Yet our bio-brains are a thousand-fold more energy efficient than our inorganic-brains at tasks where we have common ground (like facial recognition and language translation) and infinitely better for tasks of, as yet, unknown difficulty, like Einstein's Annus Mirabilis papers, or out-of-the-box inventions impacting future centuries. As Moore's Law heads from 20-nm transistor lithography down to 0.1 nm atomic precision and from 2D to 3D circuits, we may downplay reinventing and simulating our biomolecular-brains and switch to engineering them.
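As a rough sanity check on that "10^29-body problem" figure, here is a back-of-envelope count; the 70 kg body mass and ~7 u mean atomic mass are my illustrative assumptions, not Church's.

    # Order-of-magnitude count of the quantum "bodies" in a human.
    # Assumed inputs: 70 kg body, mean atomic mass ~7 u (H, O, C, N mix).
    AMU_KG = 1.66054e-27            # one atomic mass unit in kg
    atoms = 70.0 / (7.0 * AMU_KG)   # ~6e27 atoms
    electrons = atoms * 3.5         # Z is roughly half the mass number
    nucleons = atoms * 7.0          # protons + neutrons

    print(f"atoms:         {atoms:.1e}")
    print(f"all particles: {atoms + electrons + nucleons:.1e}")
    # Counting nuclei, nucleons and electrons as separate bodies lands
    # within an order of magnitude of Church's 10^29.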
We can back-up petabytes of sili-brains perfectly in seconds, but transfer of information between carbo-brains takes decades and the similarity between the copies is barely recognizable. Some speculate that we could translate from carbo to sili, and even get the sili version to behave like the original. However, such a task requires much deeper understanding than merely making a copy. We harnessed the immune system via vaccines in 10th century China and 18th century Europe, long before we understood cytokines and T-cell receptors. We do not yet have a medical nanorobot of comparable agility or utility. It may turn out that making a molecularly adequate copy of a 1.2 kg brain (or 100 kg body) is easier than understanding how it works (or than copying my brain to a room of students "multitasking" with smart phone cat videos and emails). This is far more radical than human cloning, yet does not involve embryos.
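To put rough numbers on that asymmetry between silicon and biological transfer rates, here is a hedged back-of-envelope; both bandwidth figures are my illustrative assumptions, not Church's.

    # Contrast: machine-to-machine copying vs. language-mediated
    # brain-to-brain transfer. Assumed: ~1 PB/s of aggregate machine
    # bandwidth; ~40 bits/s information rate for human speech.
    PETABYTE_BITS = 8e15
    machine_bps = 8e15              # assumed aggregate link, ~1 PB/s
    speech_bps = 40                 # assumed speech information rate
    SECONDS_PER_YEAR = 3.15e7

    print(f"machine to machine: {PETABYTE_BITS / machine_bps:.0f} s per PB")
    print(f"brain to brain: "
          f"{PETABYTE_BITS / speech_bps / SECONDS_PER_YEAR:.1e} years per PB")
    # Speech could not move even a tiny fraction of a petabyte in a
    # lifetime, which is the asymmetry Church describes.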
What civil rights issues arise with such hybrid machines?  A bio-brain of yesteryear with nearly perfect memory, which could reconstruct a scene with vivid prose, paintings or animation was permissible, often revered. But we hybrids (mutts) today, with better memory talents are banned from courtrooms, situation rooms, bathrooms and "private" conversations. Car license plates and faces are blurred in Google Street View—intentionally inflicting prosopagnosia. Should we disable or kill Harrison Bergeron? What about votes? We are currently far from universal suffrage. We discriminate based on maturity and sanity. If I copy my brain/body, does it have a right to vote, or is it redundant? Consider that the copies begin to diverge immediately or the copy could be intentionally different. In addition to passing the maturity/sanity/humanity test, perhaps the copy needs to pass a reverse-Turing test (a Church-Turing test?). Rather than demonstrating behavior indistinguishable from a human, the goal would be to show behavior distinct from human individuals. (Would the current US two-party system pass such a test?) Perhaps the day of corporate personhood (Dartmouth College v. Woodward – 1819) has finally arrived. We already vote with our wallets. Shifts in purchasing trends result in differential wealth, lobbying, R&D priorities, etc. Perhaps more copies of specific memes, minds and brains will come to represent the will of "we the (hybrid) people" of the world. Would such future Darwinian selection lead to disaster or to higher emphasis on humane empathy, aesthetics, elimination of poverty, war and disease, long-term planning—evading existential threats on even millennial time frames? Perhaps the hybrid-brain route is not only more likely, but also safer than either a leap to an unprecedented, unevolved, purely silicon-based brains—or sticking to our ancient cognitive biases with fear-based, fact-resistant voting.


James J. O'Donnell
Classical Scholar, University Professor, Georgetown University; Author, The Ruin of the Roman Empire; Pagans; Webmaster, St. Augustine's Website

1. "Thinking" is a word we apply with no discipline whatsoever to a huge variety of reported behaviors. "I think I'll go to the store" and "I think it's raining" and "I think therefore I am" and "I think the Yankees will win the World Series" and "I think I am Napoleon" and "I think he said he would be here, but I'm not sure," all use the same word to mean entirely different things. Which of them might a machine do someday?  I think that's an important question.
2. Could a machine get confused? Experience cognitive dissonance? Dream? Wonder? Forget the name of that guy over there and at the same time know that it really knows the answer and if it just thinks about something else for a while might remember? Lose track of time? Decide to get a puppy? Have low self-esteem? Have suicidal thoughts? Get bored? Worry? Pray? I think not.
3. Can artificial mechanisms be constructed to play the part in gathering information and making decisions that human beings now do? Sure, they already do. The ones that control the fuel injection on my car are a lot smarter than I am. I think I'd do a lousy job of that.
4. Could we create machines that go further and act without human supervision in ways that prove to be very good or very bad for human beings? I guess so. I think I'll love them except when they do things that make me mad—then they'll really be like people. I suppose they could run amok and create mass havoc, but I have my doubts. (Of course, if they do, nobody will care what I think.)
5. But nobody would ever ask a machine what it thinks about machines that think. It's a question that only makes sense if we care about the thinker as an autonomous and interesting being like ourselves. If somebody ever does ask a machine this question, it won't be a machine any more. I think I'm not going to worry about it for a while. You may think I'm in denial.
6. When we get tangled up in this question, we need to ask ourselves just what it is we're really thinking about.

Carlo Rovelli
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, The First Scientist: Anaximander and His Legacy

There is big confusion about thinking machines, because two questions always get mixed up. Question 1 is how close to thinking are the machines we have built, or are going to build soon. The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. The differences are in performance, structure, function, and more. Any maundering about how to deal with thinking machines is, to say the least, totally premature.
Question 2 is whether building a thinking machine is possible at all. I have never really understood this question. Of course it is possible. Why shouldn't it be? Anybody who thinks it impossible must believe in something like the existence of extra-natural entities, transcendental realities, black magic, or similar, and must have failed to digest the ABCs of naturalism: we humans are natural creatures of a natural world. It is not hard to build a thinking machine: a few minutes of a boy and a girl suffice, and then a few months of the girl letting things happen. That we haven't yet found other, more technological ways is accidental. If the right combination of chemicals can think and feel emotions, and it can (the proof being ourselves), then surely there are many other analogous mechanisms for doing the same.
The confusion stems from a simple mistake: we tend to forget that many things behave differently than few things. Take a Ferrari, or a supercomputer. Nobody doubts they are just a (suitably arranged) pile of pieces of metal and other materials, without black magic. But when we look at an (unarranged) pile of material, we usually lack the imagination to fancy that such a pile could run like a Ferrari or predict weather like a supercomputer. Similarly, when we see a bunch of material, we generally lack the imagination to fancy that (suitably arranged) it could discuss like Einstein or sing like Joplin. But it might, the proofs being Albert and Janis. Of course it takes quite some arranging and detail, and a "thinking machine" takes a lot of arranging and detail. This is why it is so hard for us to build one, apart from the boy-girl way.
Because of this mistake, we have a view of natural reality that is too flat, and this is the origin of the confusion. The world is more or less just a large collection of particles, arranged in various manners. This is just factually true. But if we try to conceive the world precisely as we conceive an amorphous and disorganised bunch of atoms, we fail to understand it, because the virtually unlimited combinatorics of these atoms is rich enough to include stones, water, clouds, trees, galaxies, rays of light, the colours of the sunset, the smiles of the girls in the spring, and the immense black starry night. As well as our emotions and our thinking about all this, which are so hard to conceive in terms of atomic combinatorics, not because some black magic intervenes from outside nature, but because these thinking machines that are ourselves are themselves quite limited in their thinking capacities.
In the unlikely event that our civilisation lasts long enough and develops enough technology to actually build something that thinks and feels as we do, in a manner different from the boy-girl one, we will confront these new natural creatures in the same manner we have always done: in the manner Europeans and Native Americans confronted one another, or in the manner we confront a new, previously unknown animal. With a variable mixture of cruelty, egoism, empathy, curiosity and respect. Because this is what we are: natural creatures in a natural world.

Nick Bostrom
Professor, Oxford University; Director, Future of Humanity Institute; Author, Superintelligence: Paths, Dangers, Strategies

First—what I think about humans who think about machines that think: I think that for the most part we are too quick to form an opinion on this difficult topic. Many senior intellectuals are still unaware of the recent body of thinking that has emerged on the implications of superintelligence. There is a tendency to assimilate any complex new idea to a familiar cliché. And for some bizarre reason, many people feel it is important to talk about what happened in various science fiction novels and movies when the conversation turns to the future of machine intelligence (though hopefully John Brockman's admonition to the Edge commentators to avoid doing so will have a mitigating effect on this occasion).
With that off my chest, I will now say what I think about machines that think:
  1. Machines are currently very bad at thinking (except in certain narrow domains).

  2. They'll probably one day get better at it than we are (just as machines are already much stronger and faster than any biological creature).

  3. There is little information about how far we are from that point, so we should use a broad probability distribution over possible arrival dates for superintelligence (see the sketch after this list).

  4. The step from human-level AI to superintelligence will most likely be quicker than the step from current levels of AI to human-level AI (though, depending on the architecture, the concept of "human-level" may not make a great deal of sense in this context).

  5. Superintelligence could well be the best thing or the worst thing that will ever have happened in human history, for reasons that I have described elsewhere.
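To make point 3 concrete, here is a minimal sketch of what working with "a broad probability distribution over possible arrival dates" could look like; the lognormal form and its parameters are my illustrative assumptions, not Bostrom's.

    # Toy forecast: represent arrival-time uncertainty as a wide
    # distribution and read off quantiles, instead of one confident date.
    # The lognormal shape and both parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    years_until_superintelligence = rng.lognormal(
        mean=np.log(40),   # assumed median of ~40 years from now
        sigma=1.0,         # wide spread: genuine uncertainty
        size=100_000)

    for q in (0.1, 0.5, 0.9):
        print(f"{int(q * 100)}th percentile: "
              f"{np.quantile(years_until_superintelligence, q):.0f} years")
    # A broad prior like this concentrates planning on a range spanning
    # decades to centuries rather than on a single forecast.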
The probability of a good outcome is determined mainly by the intrinsic difficulty of the problem: what the default dynamics are and how difficult it is to control them. Recent work indicates that this problem is harder than one might have supposed. However, it is still early days and it could turn out that there is some easy solution or that things will work out without any special effort on our part.
Nevertheless, the degree to which we manage to get our act together will have some effect on the odds. The most useful thing that we can do at this stage, in my opinion, is to boost the tiny but burgeoning field of research that focuses on the superintelligence control problem (studying questions such as how human values can be transferred to software). The reason to push on this now is partly to begin making progress on the control problem and partly to recruit top minds into this area so that they are already in place when the nature of the challenge takes clearer shape in the future. It looks like maths, theoretical computer science, and maybe philosophy are the types of talent most needed at this stage.
That's why there is an effort underway to drive talent and funding into this field, and to begin to work out a plan of action. At the time when this comment is published, the first large meeting to develop a technical research agenda for AI safety will just have taken place.


Daniel C. Dennett
Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, Intuition Pumps

The Singularity—the fateful moment when AI surpasses its creators in intelligence and takes over the world—is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") coupled with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts—Elon Musk, Stephen Hawking, and David Chalmers, among others—and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn't it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?
I think, on the contrary, that these alarm calls distract us from a more pressing problem, an impending disaster that won't need any help from Moore's Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can't think, prematurely putting civilization on auto-pilot. The process is insidious because each step of it makes good local sense, is an offer you can't refuse. You'd be a fool today to do large arithmetical calculations with pencil and paper when a hand calculator is much faster and almost perfectly reliable (don't forget about round-off error), and why memorize train timetables when they are instantly available on your smart phone? Leave the map-reading and navigation to your GPS system; it isn't conscious; it can't think in any meaningful sense, but it's much better than you are at keeping track of where you are and where you want to go.
Much farther up the staircase, doctors are becoming increasingly dependent on diagnostic systems that are provably more reliable than any human diagnostician. Do you want your doctor to overrule the machine's verdict when it comes to making a life-saving choice of treatment? This may prove to be the best—most provably successful, most immediately useful—application of the technology behind IBM's Watson, and the issue of whether or not Watson can be properly said to think (or be conscious) is beside the point. If Watson turns out to be better than human experts at generating diagnoses from available data it will be morally obligatory to avail ourselves of its results. A doctor who defies it will be asking for a malpractice suit. No area of human endeavor appears to be clearly off-limits to such prosthetic performance-enhancers, and wherever they prove themselves, the forced choice will be reliable results over the human touch, as it always has been. Hand-made law and even science could come to occupy niches adjacent to artisanal pottery and hand-knitted sweaters.
In the earliest days of AI, an attempt was made to enforce a sharp distinction between artificial intelligence and cognitive simulation. The former was to be a branch of engineering, getting the job done by hook or by crook, with no attempt to mimic human thought processes—except when that proved to be an effective way of proceeding. Cognitive simulation, in contrast, was to be psychology and neuroscience conducted by computer modeling. A cognitive simulation model that nicely exhibited recognizably human errors or confusions would be a triumph, not a failure. The distinction in aspiration lives on, but has largely been erased from public consciousness: to lay people AI means passing the Turing Test, being humanoid. The recent breakthroughs in AI have been largely the result of turning away from (what we thought we understood about) human thought processes and using the awesome data-mining powers of super-computers to grind out valuable connections and patterns without trying to make them understand what they are doing. Ironically, the impressive results are inspiring many in cognitive science to reconsider; it turns out that there is much to learn about how the brain does its brilliant job of producing future by applying the techniques of data-mining and machine learning.
But the public will persist in imagining that any black box that can do that (whatever the latest AI accomplishment is) must be an intelligent agent much like a human being, when in fact what is inside the box is a bizarrely truncated, two-dimensional fabric that gains its power precisely by not adding the overhead of a human mind, with all its distractibility, worries, emotional commitments, memories, allegiances. It is not a humanoid robot at all but a mindless slave, the latest advance in auto-pilots.
What's wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don't delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.
(1) It is very, very hard to imagine (and keep in mind) the limitations of entities that can be such valued assistants, and the human tendency is always to over-endow them with understanding—as we have known since Joe Weizenbaum's notorious Eliza program of the early 1970s. This is a huge risk, since we will always be tempted to ask more of them than they were designed to accomplish, and to trust the results when we shouldn't.
(2) Use it or lose it. As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is) but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That's an event we should bend our efforts to averting now, because it could happen any day.
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.



Donald D. Hoffman
Cognitive Scientist, UC, Irvine; Author, Visual Intelligence

How might AIs think, feel, intend, empathize, socialize, moralize? Actually, almost any way we might imagine, and many ways we might not. To stimulate our imagination, we can contemplate the varieties of natural intelligence on parade in biological systems today, and speculate about the varieties enjoyed by the 99% of species that have sojourned the earth and breathed their last—informed by those lucky few that bequeathed fossils to the pantheon of evolutionary history. We are entitled to so jog our imaginations because, according to our best theories, intelligence is a functional property of complex systems and evolution is inter alia a search algorithm which finds such functions. Thus the natural intelligences discovered so far by natural selection place a lower bound on the variety of intelligences that are possible. The theory of evolutionary games suggests that there is no upper bound: With as few as four competing strategies, chaotic dynamics and strange attractors are possible.
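The claim that four strategies suffice for chaos can be made concrete with a short simulation sketch; the payoff matrix below is an arbitrary illustrative choice, not a specific game from the evolutionary-games literature.

    # Minimal replicator dynamics for four competing strategies:
    # dx_i/dt = x_i * ((A x)_i - x . A x), integrated by Euler steps.
    import numpy as np

    A = np.array([[ 0.0, -1.0,  1.0,  0.5],   # illustrative payoffs
                  [ 1.0,  0.0, -1.0, -0.5],
                  [-1.0,  1.0,  0.0,  0.5],
                  [ 0.5, -0.5, -0.5,  0.0]])

    def replicator_step(x, dt=0.01):
        fitness = A @ x
        average = x @ fitness
        x = x + dt * x * (fitness - average)
        return x / x.sum()            # renormalize against drift

    rng = np.random.default_rng(0)
    x = np.full(4, 0.25) + rng.normal(0.0, 1e-3, 4)  # perturbed start
    x = np.abs(x) / np.abs(x).sum()
    trajectory = [x.copy()]
    for _ in range(50_000):
        x = replicator_step(x)
        trajectory.append(x.copy())
    # Depending on A, the strategy frequencies settle, cycle, or wander
    # aperiodically; the last case is the chaos Hoffman alludes to.

Whether a given matrix actually produces a strange attractor depends on its details; the point of the sketch is only that the dynamics of even a four-strategy game are easy to explore numerically.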
When we survey the natural intelligences served up by evolution, we find a heterogeneity that makes a sapiens-centric view of intelligence as plausible as a geocentric view of the cosmos. The kind of intelligence we find congenial is but another infinitesimal point in a universe of alien intelligences, a universe which does not revolve around, and indeed largely ignores, our kind.
For instance, the female mantis Pseudomantis albofimbriata, when hungry, uses sexual deception to score a meal. She releases a pheromone that attracts males, and then dines on her eager dates.
The older chick of the blue-footed booby Sula nebouxii, when hungry, engages in facultative siblicide. It kills its younger sibling with pecks, or evicts it from the nest to die of exposure. The mother looks on without interfering.
These are varieties of natural intelligence, varieties that we find at once alien and disturbingly familiar. They break our canons of empathy, society and morality; and yet our checkered history includes cannibalism and fratricide.
Our survey turns up another critical feature of natural intelligence: each instance has its limits, those points where intelligence passes the baton to stupidity.
The greylag goose Anser anser tenderly cares for her eggs—unless a volleyball is nearby. She will abandon her offspring in vain pursuit of this supernormal egg.
The male jewel beetle Julodimorpha bakewelli flies about looking to mate with a female—unless it spies just the right beer bottle. It will abandon the female for the bottle, and attempt to mate with cold glass until death do it part.
Human intelligence also passes the baton. Einstein is quoted as saying, "Two things are infinite, the universe and human stupidity, and I am not yet completely sure about the universe."
Some limits of human intelligence cause little embarrassment. For instance, the set of functions from the integers to the integers is uncountable, whereas the set of computable functions is countable. Therefore almost all functions are not computable. But try to think of one. It turns out that it takes a genius, an Alan Turing, to come up with an example such as the halting problem. And it takes an exceptional mind, just short of genius, even to understand the example.
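To spell out the counting argument behind that claim (standard textbook material, added here for clarity, not part of the original essay): programs are finite strings over a finite alphabet \(\Sigma\), so there are at most countably many of them,

\[ |\{\text{programs}\}| \;\le\; \Big|\textstyle\bigcup_{n\ge 1}\Sigma^{n}\Big| \;=\; \aleph_{0}, \]

while any purported enumeration \(f_1, f_2, f_3, \ldots\) of all functions from the integers to the integers misses the diagonal function

\[ g(n) \;=\; f_{n}(n) + 1, \]

since \(g(n) \ne f_{n}(n)\) for every \(n\). The functions therefore outnumber the programs, so almost all functions are uncomputable.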
Other limits strike closer to home: diabetics who can't refuse dessert, alcoholics who can't refuse a drink, gamblers who can't refuse a bet. But it's not just addicts. Behavioral economists find that all of us make "predictably irrational" economic choices. Cognitive psychologists find that we all suffer from "functional fixedness," an inability to solve certain trivial problems, such as Duncker's candle box problem, because we can't think out of the box. The good news, however, is that the endless variety of our limits provides job security for psychotherapists.
But here is the key point. The limits of each intelligence are an engine of evolution. Mimicry, camouflage, deception, parasitism—all are effects of an evolutionary arms race between different forms of intelligence sporting different strengths and suffering different limits.
Only recently has the stage been set for AIs to enter this race. As our computing resources expand and become better connected, more niches will appear in which AIs can reproduce, compete and evolve. The chaotic nature of evolution makes it impossible to predict precisely what new forms of AI will emerge. We can confidently predict, however, that there will be surprises and mysteries, strengths where we have weaknesses, and weaknesses where we have strengths.
But should this be cause for alarm? I think not. The evolution of AIs presents risks and opportunities. But so does the biological evolution of natural intelligences. We have learned that the best way to cope with the variety of natural intelligences is not alarm, but prudence. Don't hug rattlesnakes, don't taunt grizzly bears, wear mosquito repellent. To deal with the evolving strategies of viruses and bacteria, wash hands, avoid sneezes, get a flu shot. Occasionally, as with Ebola, further measures are required. But once again prudence, not alarm, is effective. The evolution of natural intelligences can be a source of awe and inspiration, if we embrace it with prudence rather than spurn it with alarm.
All species go extinct. Homo sapiens will be no exception. We don't know how it will happen—a virus, an alien invasion, nuclear war, a supervolcano, a large meteor, a red-giant sun. Yes, it could be AIs, but I would bet long odds against it. I would bet, instead, that AIs will be a source of awe, insight, inspiration, and yes, profit, for years to come.


Roger Schank. Psychologist & Computer Scientist; Engines for Education Inc.; Author, Teaching Minds: How Cognitive Science Can Save Our Schools

Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
The overpromising of "expert systems" in the 1980s killed off serious funding for the kind of AI that tries to build virtual humans. Very few people are working in this area today. But, according to the media, we must be very afraid.
We have all been watching too many movies.
There are two choices when you work on AI. One is the "let's copy humans" method. The other is the "let's do some really fast statistics-based computing" method. As an example, early chess-playing programs tried to out-compute those they played against. But human players have strategies, and anticipation of an opponent's thinking is also part of chess playing. When the "out-compute them" strategy didn't work, AI people started watching what expert players did and started to imitate that. The "out-compute them" strategy is more in vogue today.
We can call both of these methodologies AI if we like, but neither will lead to machines that create a new society.
The "out compute them" strategy is not frightening because the computer really has no idea what it is doing. It can count things fast without understanding what it is counting. It has counting algorithms, that's it. We saw this with IBM's Watson program on Jeopardy.
One Jeopardy question was: "It was the anatomical oddity of U.S. Gymnast George Eyser, who won a gold medal on the parallel bars in 1904."
A human opponent answered as follows: "Eyser was missing an arm"—and Watson then said, "What is a leg?" Watson lost for failing to note that the leg was "missing."
Try a Google search on "Gymnast Eyser." Wikipedia comes up first with a long article about him. Watson depends on Google. If a Jeopardy contestant could use Google, they would do better than Watson. Watson can translate "anatomical" into "body part," and Watson knows the names of the body parts. Watson did not know what an "oddity" is, however. Watson would not have known that a gymnast without a leg was weird. If the question had been "what was weird about Eyser?" the human contestants would have done fine. Watson would not have found "weird" in the Wikipedia article, nor have understood what gymnasts do, nor why anyone would care. Try Googling "weird" and "Eyser" and see what you get. Keyword search is not thinking, nor anything like thinking.
If we asked Watson why a disabled person would perform in the Olympics, Watson would have no idea what was even being asked. It wouldn't have understood the question, much less have been able to find the answer. Number crunching can only get you so far. Intelligence, artificial or otherwise, requires knowing why things happen, what emotions they stir up, and being able to predict possible consequences of actions. Watson can't do any of that. Thinking and searching text are not the same thing.
The human mind is complicated. Those of us on the "let's copy humans" side of AI spend our time thinking about what humans can do. Many scientists think about this, but basically we don't know that much about how the mind works. AI people try to build models of the parts we do understand. We know a little about how language is processed and how learning works; about consciousness and memory retrieval, not so much.
As an example, I am working on a computer that mimics human memory organization. The idea is to produce a computer that can, as a good friend would, tell you just the right story at the right time. To do this, we have collected (in video) thousands of stories (about defense, about drug research, about medicine, about computer programming …). When someone is trying to do something, or find something out, our program can chime in with a story it is reminded of that it heard. Is this AI? Of course it is. Is it a computer that thinks? Not exactly.
Why not?
In order to accomplish this task we must interview experts and then we must index the meaning of the stories they tell according to the points they make, the ideas they refute, the goals they talk about achieving, and the problems they experienced in achieving them. Only people can do this. The computer can then match a story's assigned index against other indices: those of other stories it has, indices derived from user queries, or those from an analysis of a situation it knows the user is in. The computer can come up with a very good story to tell just in time. But of course it doesn't know what it is saying. It can simply find the best story to tell.
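To make the mechanism concrete, here is a minimal sketch of that kind of index matching in Python. The field names and scoring rule are my illustration of the idea described above, not Schank's actual system:

    # Hypothetical sketch of index-based story retrieval. Humans assign
    # the indices (points, goals, problems); the machine only matches them.
    from dataclasses import dataclass

    @dataclass
    class Story:
        title: str
        points: set     # points the story makes
        goals: set      # goals the teller was pursuing
        problems: set   # problems encountered along the way

    def match_score(query: set, story: Story) -> int:
        """Count overlaps between a query index and a story's index."""
        return len(query & (story.points | story.goals | story.problems))

    def best_story(query: set, stories: list) -> Story:
        """Return the story whose human-assigned index best matches the query."""
        return max(stories, key=lambda s: match_score(query, s))

    stories = [
        Story("drug trial setback", {"negative results teach"}, {"approval"}, {"funding"}),
        Story("shipping late", {"test early"}, {"ship on time"}, {"regressions"}),
    ]
    print(best_story({"test early", "regressions"}, stories).title)  # shipping late

The point of the sketch is the asymmetry: all the understanding lives in the human-assigned indices, while the program only intersects sets.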
Is this AI? I think it is. Does it copy how humans index stories in memory? We have been studying how people do this for a long time and we think it does. Should you be afraid of this "thinking" program?
This is where I lose it about the fear of AI. There is nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be a household robot. Everyone wants a personal servant. The movies depict robot servants (although usually stupidly) because they are funny and seem like cool things to have.
Why don't we have them? Because having a useful servant entails having something that understands when you tell it something, that learns from its mistakes, that can navigate your home successfully and that doesn't break things, act annoyingly, and so on (all of which is way beyond anything we can do). Don't worry about it chatting up other robot servants and forming a union. There would be no reason to try to build such a capability into a servant. Real servants are annoying sometimes because they are actually people with human needs. Computers don't have such needs.
We are nowhere near close to creating this kind of machine. To do so would require a deep understanding of human interaction. It would have to understand "Robot, you overcooked that again," or "Robot, the kids hated that song you sang them." Everyone should stop worrying and start rooting for some nice AI stuff that we can all enjoy.



Mark Pagel. Professor of Evolutionary Biology, Reading University, England, and The Santa Fe Institute

There is no reason to believe that as machines become more intelligent—and intelligence such as ours is still little more than a pipe-dream—they will become evil, manipulative, self-interested or in general, a threat to humans. Self-interest is a property of things that 'want' to stay alive (or more accurately, that want to reproduce), and this is not a natural property of machines—computers don’t mind, much less worry, about being switched off.
So, full-blown artificial intelligence (AI) will not spell the 'end of the human race', it is not an 'existential threat' to humans (digression: this now-common use of 'existential' is incorrect), we are not approaching some ill-defined apocalyptic 'singularity', and the development of AI will not be 'the last great event in human history'—all claims that have recently been made about machines that can think.
In fact, as we design machines that get better and better at thinking, they can be put to uses that will do us far more good than harm. Machines are good at long, monotonous tasks like monitoring risks, they are good at assembling information to reach decisions, they are good at analyzing data for patterns and trends, they can arrange for us to use scarce or polluting resources more efficiently, they react faster than humans, they are good at operating other machines, they don’t get tired or afraid, and they can even be put to use looking after their human owners, as in the form of smartphones with applications like Siri and Cortana, or the various GPS route-planning devices most people have in their cars.
Being inherently selfless rather than self-interested, machines can easily be taught to cooperate, and without fear that some of them will take advantage of the other machines' goodwill. Groups (packs, teams, bands, or whatever collective noun will eventually emerge—I prefer the ironic jams) of networked and cooperating driverless cars will drive safely nose-to-tail at high speeds: they won't nod off, they won't get angry, they can inform each other of their actions and of conditions elsewhere, and they will make better use of the motorways, which now are mostly unoccupied space (owing to humans' unremarkable reaction times). They will do this happily and without expecting reward, and do so while we eat our lunch, watch a film, or read the newspaper. Our children will rightly wonder why anyone ever drove a car.
There is a risk that we will become, and perhaps already have become, dangerously dependent on machines, but this says more about us than about them. Equally, machines can be made to do harm, but again, this says more about their human inventors and masters than about the machines. Along these lines, there is one strand of human influence on machines that we should monitor closely: introducing the possibility of death. If machines have to compete for resources (like electricity or gasoline) to survive, and they have some ability to alter their behaviours, they could become self-interested.
Were we to allow or even encourage self-interest to emerge in machines, they could eventually become like us: capable of repressive or worse, unspeakable, acts towards humans, and towards each other. But this wouldn’t happen overnight, it is something we would have to set in motion, it has nothing to do with intelligence (some viruses do unspeakable things to humans), and again says more about what we do with machines than machines themselves.
So, it is not thinking machines or AI per se that we should worry about but people. Machines that can think are neither for us nor against us, and have no built-in predilections to be one over the other. To think otherwise is to confuse intelligence with aspiration and its attendant emotions. We have both because we are evolved and replicating (reproducing) organisms, selected to stay alive in often cut-throat competition with others. But aspiration isn’t a necessary part of intelligence, even if it provides a useful platform on which intelligence can evolve.
Indeed, we should look forward to the day when machines can transcend mere problem solving, and become imaginative and innovative—still a long, long way off but surely a feature of true intelligence—because this is something humans are not very good at, and yet we will probably need it more in the coming decades than at any time in our history.
 

Frank Wilczek. Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, The Lightness of Being

1. We are They
Francis Crick called it the "Astonishing Hypothesis": that consciousness, also known as Mind, is an emergent property of matter. As molecular neuroscience progresses, encountering no boundaries, and computers reproduce more and more of the behaviors we call intelligence in humans, that Hypothesis looks inescapable. If it is true, then all intelligence is machine intelligence. What distinguishes natural from artificial intelligence is not what it is, but only how it is made.
Of course, that little word "only" is doing some heavy lifting here. Brains use a highly parallel architecture, and mobilize many noisy analog units (i.e., neurons) firing simultaneously, while most computers use von Neumann architecture, with serial operation of much faster digital units. These distinctions are blurring, however, from both ends. Neural net architectures are built in silicon, and brains interact ever more seamlessly with external digital organs. Already I feel that my laptop is an extension of my self—in particular, it is a repository for both visual and narrative memory, a sensory portal into the outside world, and a big part of my mathematical digestive system.
2. They are Us
Artificial intelligence is not the product of an alien invasion. It is an artifact of a particular human culture, and reflects the values of that culture.
3. Reason Is the Slave of the Passions
David Hume's striking statement: "Reason Is, and Ought only to Be, the Slave of the Passions" was written in 1738, long before anything like modern AI was on the horizon. It was, of course, meant to apply to human reason and human passions. (Hume used the word "passions" very broadly, roughly to mean "non-rational motivations".) But Hume's logical/philosophical point remains valid for AI. Simply put: Incentives, not abstract logic, drive behavior.
That is why the AI I find most alarming is its embodiment in autonomous military entities—artificial soldiers, drones of all sorts, and "systems." The values we may want to instill in such entities are alertness to threats and skill in combatting them. But those positive values, gone even slightly awry, slide into paranoia and aggression. Without careful restraint and tact, researchers could wake up to discover they've enabled the creation of armies of powerful, clever, vicious paranoiacs.
Incentives driving powerful AI might go wrong in many ways, but that route seems to me the most plausible, not least because militaries wield vast resources, invest heavily in AI research, and feel compelled to compete with one another. (In other words, they anticipate possible threats and prepare to combat them ... )




Robert Provine. Research Professor/Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

Fear not the malevolent toaster, weaponized Roomba, or larcenous ATM. Breakthroughs in the competence of machines, intelligent or otherwise, should not drive paranoia about a future clash between humanity and its mechanical creations. Humans will prevail, in part through primal, often disreputable qualities that are more associated with our downfall than our salvation. Cunning, deception, revenge, suspicion, and unpredictability befuddle less flexible and imaginative entities. Intellect isn't everything, and the irrational is not necessarily maladaptive. Irrational acts stir the neurological pot, nudging us out of unproductive ruts and into creative solutions. Our sociality yields a human superorganism with teamwork and collective, distributed intelligence. There are perks to being emotional beasts of the herd.
Thought experiments about these matters are the source of practical insights into human and machine behavior and suggest how to build different and better kinds of machines. Can deception, rage, fear, revenge, empathy, and the like, be programmed into a machine, and to what effect? (This requires more than the superficial emulation of human affect.) Can a sense of self-hood be programmed into a machine—say, via tickle? How can we produce social machines, and what kind of command structure is required to organize their teamwork? Will groups of autonomous, social machines generate an emergent political structure, culture, and tradition? How will such machines treat their human creators? Can natural and artificial selection be programmed into self-replicating robots?
There is no indication that we will have a problem keeping our machines on a leash, even if they misbehave. We are far from building teams of swaggering, unpredictable, Machiavellian robots with an attitude problem and urge to reproduce.



Susan Blackmore. Psychologist; Author, Consciousness: An Introduction

I think that humans think because memes took over our brains and redesigned them. I think that machines think because the next replicator is doing the same. It is busily taking over the digital machinery that we are so rapidly building and creating its own kind of thinking machine.
Our brains, and our capacity for thought, were not designed by a great big intelligent designer in the sky who decided how we should think and what our motivations should be. Our intelligence and our motivations evolved. Most (probably all) AI researchers would agree with that. Yet many still seem to think that we humans are intelligent designers who can design machines that will think the way we want them to think and have the motivations we want them to have. If I am right about the evolution of technology, they are wrong.
The problem is a kind of deluded anthropomorphism: we imagine that a thinking machine must work the way that we do, yet we so badly mischaracterise ourselves that we do the same with our machines. As a consequence we fail to see that all around us vast thinking machines are evolving on just the same principles as our brains once did. Evolution, not intelligent design, is sculpting the way they will think.
The reason is easy to see and hard to deal with. It is the same dualism that bedevils the scientific understanding of consciousness and free will. From infancy, it seems, children are natural dualists, and this continues throughout most people's lives. We imagine ourselves as the continuing subjects of our own stream of consciousness, the wielders of free will, the decision makers that inhabit our bodies and brains. Of course this is nonsense. Brains are massively parallel instruments untroubled by conscious ghosts.
This delusion may, or may not, have useful functions but it obscures how we think about thinking. Human brains evolved piecemeal, evolution patching up what went before, adding modules as and when they were useful, and increasingly linking them together in the service of the genes and memes they carried. The result was a living thinking machine.
Our current digital technology is similarly evolving. Our computers, servers, tablets, and phones evolved piecemeal, new ones being added as and when they were useful and now being rapidly linked together, creating something that looks increasingly like a global brain. Of course in one sense we made these gadgets, even designed them for our own purposes, but the real driving force is the design power of evolution and selection: the ultimate motivation is the self-propagation of replicating information.
We need to stop picturing ourselves as clever designers who retain control and start thinking about our future role. Could we be heading for the same fate as the humble mitochondrion: a once-independent cell that was long ago absorbed into a larger cell? It gave up independent living to become a powerhouse for its host, while the host gave up energy production to concentrate on other tasks. Both gained in this process of endosymbiosis.
Are we like that? Digital information is evolving all around us, thriving on billions of phones, tablets, computers, servers, and tiny chips in fridges, cars and clothes, passing around the globe, interpenetrating our cities, our homes and even our bodies. And we keep on willingly feeding it. More phones are made every day than babies are born, 100 hours of video are uploaded to the Internet every minute, billions of photos are uploaded to the expanding cloud. Clever programmers write ever cleverer software, including programs that write other programs that no human can understand or track. Out there, taking their own evolutionary pathways and growing all the time, are the new thinking machines.
Are we going to control these machines? Can we insist that they are motivated to look after us? No. Even if we can see what is happening, we want what they give us far too much not to swap it for our independence.
So what do I think about machines that think? I think that from being a little independent thinking machine I am becoming a tiny part inside a far vaster thinking machine.


Haim Harari. Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

When we say "machines that think", we really mean: "machines that think like people". It is obvious that, in many different ways, machines do think: They trigger events, process things, take decisions, make choices, and perform many, but not all, other aspects of thinking. But the real question is whether machines can think like people, to the point of the age old test of artificial intelligence: You will observe the results of the thinking, and you will not be able to tell if it is a machine or a human.
Some prominent scientific gurus are scared by a world controlled by thinking machines. I am not sure that this is a valid fear. I am more concerned about a world led by people, who think like machines, a major emerging trend of our digital society.
You can teach a machine to track an algorithm and to perform a sequence of operations which follow logically from each other. It can do so faster and more accurately than any human. Given well-defined basic postulates or axioms, pure logic is the strong suit of the thinking machine. But exercising common sense in making decisions and being able to ask meaningful questions are, so far, the prerogative of humans. Merging intuition, emotion, empathy, experience and cultural background, and using all of these to ask a relevant question and to draw conclusions by combining seemingly unrelated facts and principles, are trademarks of human thinking, not yet shared by machines.
Our human society is currently moving fast towards rules, regulations, laws, investment vehicles, political dogmas and patterns of behavior that blindly follow strict logic, even when it starts with false foundations or collides with obvious common sense. Religious extremism has always progressed on the basis of some absurd axioms, leading very logically to endless harsh consequences. Several disciplines such as law, accounting and certain areas of mathematics and technology, augmented by bureaucratic structures and by media which idolize inflexible regulators, often lead to opaque principles like "total transparency" and to tolerance towards acts of extreme intolerance. These and similar trends are visibly moving us towards more algorithmic and logical modes of tackling problems, often at the expense of common sense. If common sense, whatever its definition is, describes one of the advantages of people over machines, what we see today is a clear move away from this distinctive human asset.
Unfortunately, the gap between machine thinking and human thinking can narrow in two ways, and when people begin to think like machines, we automatically achieve the goal of "machines that think like people," reaching it from the wrong direction. A very smart person who reaches conclusions on the basis of one line of information, in a split second between dozens of e-mails, text messages and tweets, not to speak of other digital disturbances, jumping to premature conclusions and signing public petitions about subjects he or she is unfamiliar with, is not superior to a machine of moderate intelligence that analyzes a large amount of relevant information before it reaches a decision.
One can recite hundreds of examples of this trend. We all support the law that every new building should allow total access to people with special needs, while old buildings may remain inaccessible, until they are renovated. But does it make sense to disallow a renovation of an old bathroom which will now offer such access, because a new elevator cannot be installed? Or to demand full public disclosure of all CIA or FBI secret sources in order to enable a court of law to sentence a terrorist who obviously murdered hundreds of people? Or to demand parental consent before giving a teenager an aspirin at school? And then when school texts are converted from the use of miles to kilometers, the sentence "From the top of the mountain you can see for approximately 100 miles" is translated, by a person, into "you can see for approximately 160.934 km".
The standard sacred cows of liberal democracy rightfully include a wide variety of freedoms: freedom of speech, freedom of the press, academic freedom, freedom of religion (or of lack of religion), freedom of information, and numerous other human rights including equal opportunity, equal treatment by law, and absence of discrimination. We all support these principles, but pure and extreme logic induces us, against common sense, to insist mainly on the human rights of criminals and terrorists, because the human rights of the victims "are not an issue"; transparency and freedom of the press logically demand complete reports on internal brainstorming sessions, in which delicate issues are pondered, thus preventing any free discussion and raw thinking in certain public bodies; academic freedom might logically be misused, against common sense and against factual knowledge, to teach about Noah's ark as an alternative to evolution, to deny the Holocaust in teaching history, or to preach a universe created 6,000 years ago (rather than about 13 billion) as the basis of cosmology. We can continue on and on with examples, but the message is clear.
Algorithmic thinking, brevity of messages and over-exertion of pure logic are moving us, human beings, into machine thinking, rather than slowly and wisely teaching our machines to benefit from our common sense and intellectual abilities. A reversal of this trend would be a meaningful U-turn in human digital evolution.



Andy Clark. Philosopher and Cognitive Scientist, University of Edinburgh; Author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension

A common theme in recent writings about machine intelligence is that the best new learning machines will constitute rather alien forms of intelligence. I'm not so sure. The reasoning behind the 'alien AIs' image usually goes something like this. The best way to get machines to solve hard real-world problems is to set them up as statistically-sensitive learning machines able to benefit maximally from exposure to 'big data'. Such machines will often learn to solve complex problems by detecting patterns, and patterns among patterns, and patterns within patterns, hidden deep in the massed data streams to which they are exposed. This will most likely be achieved using 'deep learning' algorithms to mine deeper and deeper into the data streams. After such learning is complete, what results may be a system that works but whose knowledge structures are opaque to the engineers and programmers who set the system up in the first place.
Opaque? In one sense yes. We won't (at least without further work) know in detail what has become encoded as a result of all that deep, multi-level, statistically-driven learning. But alien? I'm going to take a big punt at this point and road-test a possibly outrageous claim. I suspect that the more these machines learn, the more they will end up thinking in ways that are recognizably human. They will end up having a broad structure of human-like concepts with which to approach their tasks and decisions. They may even learn to apply emotional and ethical labels in roughly the same ways we do. If I am right, this somewhat undermines the common worry that these are emerging alien intelligences whose goals and interests we cannot fathom, and that might therefore turn on us in unexpected ways. By contrast, I suspect that the ways they might turn on us will be all-too-familiar—and thus hopefully avoidable by the usual steps of extending due respect and freedom.
Why would the machines think like us? The reason for this has nothing to do with our ways of thinking being objectively right or unique. Rather, it has to do with what I'll dub the 'big data food chain'. These AIs, if they are to emerge as plausible forms of general intelligence, will have to learn by consuming the vast electronic trails of human experience and human interests. For this is the biggest repository of general facts about the world that we have available. To break free of restricted uni-dimensional domains, these AIs will have to trawl the mundane seas of words and images that we lay down on Facebook, Google, Amazon, and Twitter. Where before they may have been force-fed a diet of astronomical objects or protein-folding puzzles, the break-through general intelligences will need a richer and more varied diet. That diet will be the massed strata of human experience preserved in our daily electronic media.
The statistical baths in which we immerse these potent learning machines will thus be all-too-familiar. They will feed off the fossil trails of our own engagements, a zillion images of bouncing babies, bouncing balls, LOL-cats, and potatoes that look like the Pope. These are the things that they must crunch into a multi-level world-model, finding the features, entities, and properties (latent variables) that best capture the streams of data to which they are exposed. Fed on such a diet, these AIs may have little choice but to develop a world-model that has much in common with our own. They are probably more in danger of becoming Super Mario freaks than becoming super-villains intent on world domination.
Such a diagnosis (which is tentative and at least a little playful) goes against two prevailing views. First, as mentioned earlier, it goes against the view that current and future AIs are basically alien forms of intelligence feeding off big data and crunching statistics in ways that will render their intelligences increasingly opaque to human understanding. On the contrary, access to more and more data, of the kind most freely available, won't make them more alien but less so.
Second, it questions the view that the royal route to human-style understanding is human-style embodiment, with all the interactive potentialities (to stand, sit, jump, etc.) that that implies. For although our own typical route to understanding the world goes via a host of such interactions, it seems quite possible that theirs need not. Such systems will doubtless enjoy some (probably many and various) means of interacting with the physical world. These encounters will be combined, however, with exposure to rich information trails reflecting our own modes of interaction with the world. So it seems possible that they could come to understand and appreciate soccer and baseball just as much as the next person. An apt comparison here might be with a differently-abled human being.
There's lots more to think about here, of course. For example, the AIs will see huge swathes of human electronic trails, and will thus be able to discern patterns of influence among them over time. That means they may come to model us less as individuals and more as a kind of complex distributed system. That's a difference that might make a difference. And what about motivation and emotion? Maybe these depend essentially upon features of our human embodiment such as gut feelings and visceral responses to danger? Perhaps, but notice that these features of human life have themselves left fossil trails in our electronic repositories.
I might be wrong. But at the very least, I think we should think twice before casting our home-grown AIs as emerging forms of alien intelligence. You are what you eat, and these learning systems will have to eat us. Big time.


William Poundstone. Journalist; Author, Are You Smart Enough to Work at Google?; Nominated twice for the Pulitzer Prize

My favorite Edsger Dijkstra aphorism is this one: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." Yet we keep playing the imitation game: asking how closely machine intelligence can duplicate our own intelligence, as if that is the real point. Of course, once you imagine machines with human-like feelings and free will, it's possible to conceive of misbehaving machine intelligence—the AI as Frankenstein idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I have concluded it's not.
Here's the case for overblown. Machine intelligence can go in so many directions. It is a failure of imagination to focus on human-like directions. Most of the early futurist conceptions of machine intelligence were wildly off base because computers have been most successful at doing what humans can't do well. Machines are incredibly good at sorting lists. Maybe that sounds boring, but think of how much efficient sorting has changed the world.
In answer to some of the questions brought up here, it is far from clear that there will ever be a practical reason for future machines to have emotions and inner dialog; to pass for human under extended interrogation; to desire, and be able to make use of, legal and civil rights. They're machines, and they can be anything we design them to be.
But that's the point. Some people will want anthropomorphic machine intelligence. How many videos of Japanese robots have you seen? Honda, Sony, and Hitachi already expend substantial resources on making cute AI that has no concrete value beyond corporate publicity. They do this for no better reason than that tech enthusiasts have grown up seeing robots and intelligent computers in movies.
Almost anything that is conceived—that is physically possible and reasonably cheap—is realized. So human-like machine intelligence is a meme with manifest destiny, regardless of practical value. This could entail nice machines-that-think, obeying Asimov's laws. But once the technology is out there, it will get ever cheaper and filter down to hobbyists, hackers, and "machine rights" organizations. There is going to be interest in creating machines with will, whose interests are not our own. And that's without considering what machines terrorists, rogue regimes, and intelligence agencies of the less roguish nations may devise. I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously.
 http://en.wikipedia.org/wiki/William_Poundstone


Peter Norvig. Director of Research, Google Inc.; Fellow of the AAAI and the ACM; Co-author, Artificial Intelligence: A Modern Approach

In 1950, Alan Turing suggested we should ask not "Can machines think?" but rather "What can machines do?" Edsger Dijkstra got it right in 1984 when he said that the question of whether machines can think "is about as relevant as the question of whether submarines can swim." By that he meant that both are questions in sociolinguistics: how do we choose to use words such as "think"? In English, submarines do not swim, but in Russian, they do. This is irrelevant to the capabilities of submarines. So let's explore what it is that machines can do, and whether we should fear their capabilities.
Pessimists warn that we don't know how to safely and reliably build large complex AI systems. They have a valid point. We also don't know how to safely and reliably build large complex non-AI systems. For example, we invented the internal combustion engine 150 years ago, and in many ways it has served humanity well, but it has also led to widespread pollution, political instability over access to oil, a million deaths per year, and other problems.
Any complex system will have a mix of positive outcomes and unintended consequences, but are there worrisome issues that are unique to systems built with AI? I think the interesting issues are adaptability, autonomy, and universality.
Systems that use machine learning are adaptable. They change over time, based on what they learn from examples. Adaptability is useful. We want, say, our automated spelling correction programs to quickly learn new terms such as "bitcoin", rather than waiting for the next edition of a published dictionary to list them. A non-adaptable program will repeat the same mistakes. But an adaptable program can make new mistakes, which may be harder to predict and deal with. We have tools for dealing with these problems, but just as the designers of bridges must learn to deal with crosswinds, so the designers of AI systems must learn to deal with adaptability.
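As a toy illustration of that adaptability (my invention, not any production spelling corrector), consider a vocabulary model that stops flagging a word once it has been seen often enough:

    # Toy adaptable lexicon: words seen at least `threshold` times stop
    # being flagged as misspellings. Hypothetical illustration only.
    from collections import Counter

    class AdaptiveLexicon:
        def __init__(self, seed_words, threshold=3):
            self.threshold = threshold
            self.counts = Counter({w: threshold for w in seed_words})

        def observe(self, text):
            """Learn from incoming text: every sighting strengthens a word."""
            for word in text.lower().split():
                self.counts[word] += 1

        def is_misspelling(self, word):
            return self.counts[word.lower()] < self.threshold

    lex = AdaptiveLexicon({"coin", "bit"})
    print(lex.is_misspelling("bitcoin"))    # True: unknown term
    for _ in range(3):
        lex.observe("bitcoin payments rose")
    print(lex.is_misspelling("bitcoin"))    # False: learned from usage

Note that "payments" and "rose" were learned along the way too; an adaptable program that absorbs typos together with genuine new coinages is exactly the kind of new, harder-to-predict mistake described above.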
Some critics are worried about AI systems that are built with a framework that maximizes expected utility. Such an AI system estimates the current state of the world, considers all the possible actions it can take, simulates the possible outcomes of those actions, and then chooses the action that leads to the best possible distribution of outcomes. Errors can occur at any point along the way, but the concern here is in determining what is the "best outcome"—in other words, what is it that we desire? If we describe the wrong desires, or allow a system to adapt its desires in a wrong direction, we get the wrong results.
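A minimal sketch of that loop, with invented states, actions, and utilities (the structure is the standard expected-utility recipe; the example numbers are mine):

    # Expected-utility action selection: simulate outcome distributions,
    # score them, pick the best. The utility function encodes "desires";
    # get it wrong and the chosen action is confidently wrong too.
    def choose_action(state, actions, outcomes, utility):
        """outcomes(state, a) -> [(probability, next_state), ...]"""
        def expected_utility(a):
            return sum(p * utility(s2) for p, s2 in outcomes(state, a))
        return max(actions, key=expected_utility)

    # Invented example: an autonomous car deciding whether to brake.
    outcomes = lambda s, a: {"brake":  [(1.0, "safe")],
                             "cruise": [(0.9, "fast"), (0.1, "crash")]}[a]
    utility = {"safe": 1.0, "fast": 2.0, "crash": -100.0}.get
    print(choose_action("road", ["brake", "cruise"], outcomes, utility))  # brake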
History shows that we often get this wrong, in all kinds of systems that we build, not just in AI systems. The US Constitution is a document that specifies our desires; the original framers made what we now recognize as an error in this specification, and correcting that error with the 13th Amendment cost over 600,000 lives. Similarly, we designed stock-trading systems that allowed speculators to create bubbles that led to busts. These are important issues for system design (and what is known as "mechanism design"), and are not specific to AI systems. The world is complicated, so acting correctly in the world is complicated.
The second concern is autonomy. If AI systems act on their own, they can make errors that perhaps would not be made by a system with a human in the loop. This too is a valid concern, and again one that is not unique to AI systems. Consider our system of automated traffic lights, which replaced a system of human policemen directing traffic. The automated system leads to some errors, but is a tradeoff that we have decided is worthwhile. We will continue to make tradeoffs in where we deploy autonomous systems.
There is a possibility that we will soon see a widespread increase in the capabilities of autonomous systems, and thus more displacement of people. This could lead to a societal problem of increased unemployment and income inequality. To me, this is the most serious concern about future AI systems. In past technological revolutions (agricultural and industrial) the notion of work changed, but the changes happened over generations, not years, and the changes always led to new jobs. We may be in for a period of change that is much more rapid and disruptive; we will need some social conventions and safety nets to restore stability.
The third concern is the universality of intelligent machines. In 1965 I. J. Good wrote "an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." I think this fetishizes "intelligence" as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the Middle East, and thinking. I didn't come up with a solution. Now imagine a hypothetical "Speed Superintelligence" (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I'm pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there is a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won't have enough computing power. So there are some problems where intelligence (or computing power) just doesn't help.
But of course, there are many problems where intelligence does help. If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn't fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label "AI" on it or not.



Rodney Brooks. Roboticist; Panasonic Professor of Robotics (emeritus), MIT; Founder, Chairman & CTO, Rethink Robotics; Author, Flesh and Machines

"Think" and "intelligence" are both what Marvin Minsky has called suitcase words. They are words into which we pack many meanings so that we can talk about complex issues in a shorthand way. When we look inside these words we find many different aspects, mechanisms, and levels of understanding. This makes answering the perennial questions of "can machines think?" or "when will machines reach human level intelligence?" fraught with danger. The suitcase words are used to cover both specific performance demonstrations by machines and more general competence that humans might have. People are getting confused and generalizing from performance to competence and grossly overestimating the real capabilities of machines today and in the next few decades.
In 1997 IBM's Deep Blue supercomputer beat world chess champion Garry Kasparov in a six-game match. Today there are dozens of programs that run on laptop computers and have higher chess ratings than those ever achieved by humans. Computers can definitely perform better than humans at playing chess. But they have nowhere near human-level competence at chess.
All chess-playing programs use Turing's brute-force tree search method with heuristic evaluation. Computers were fast enough by the seventies that this approach overwhelmed other AI programs that tried to play chess with processes that emulated how people reported that they thought about their next move, and so those approaches were largely abandoned.
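The skeleton of that method fits in a dozen lines. Here is a generic sketch (no pruning or engine-specific machinery; the toy game attached is mine, so the example runs):

    # Brute-force game-tree search with a heuristic evaluation at the
    # leaves: the common skeleton of chess programs (real engines add
    # alpha-beta pruning, move ordering, and much else).
    def minimax(state, depth, maximizing, moves, play, evaluate):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state)   # a bare number; no notion of "why"
        scores = [minimax(play(state, m), depth - 1, not maximizing,
                          moves, play, evaluate) for m in options]
        return max(scores) if maximizing else min(scores)

    # Toy game: players alternately append 1 or 2; the score is the sum.
    moves = lambda s: [] if len(s) == 3 else [1, 2]
    play = lambda s, m: s + [m]
    evaluate = lambda s: sum(s)
    print(minimax([], 3, True, moves, play, evaluate))  # 5

All the "chess knowledge" hides in evaluate, a single number per position, which is why, as noted below, such programs cannot say why one move is better than another.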
Today's chess programs have no way of saying why a particular move is "better" than another move, save that it moves the game to a part of the tree where the opponent has less good options. A human player can make generalizations and describe why certain types of moves are good, and use that to teach another player. Brute-force programs cannot teach a human player, except by being a sparring partner. It is up to the human to make the inferences and analogies, and to do any learning on their own. The chess program doesn't know that it is outsmarting the person, doesn't know that it is a teaching aid, doesn't know that it is playing something called chess, nor even what "playing" is. Making brute-force chess playing perform better than any human gets us no closer to competence in chess.
Now consider deep learning, which has caught people's imaginations over the last year or so. It is an update to backpropagation, a thirty-year-old learning algorithm very loosely based on abstracted models of neurons. Layers of neurons map from a signal, such as the amplitude of a sound wave or pixel brightness in an image, to increasingly higher-level descriptions of the full meaning of the signal, as words for sound, or objects in images. Originally backpropagation could only practically work with just two or three layers of neurons, so it was necessary to hand-design fixed preprocessing steps to get the signals into more structured form before applying the learning algorithms. The new versions work with more layers of neurons, making the networks deeper, hence the name deep learning. Now the early processing steps are also learned, and without the misguided human biases of design, the new algorithms are spectacularly better than the algorithms of just three years ago. That is why they have caught people's imaginations. The new versions rely on massive amounts of computer power in server farms, and on very large data sets that did not formerly exist, but critically, they also rely on new scientific innovations.
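For concreteness, here is backpropagation in miniature: a two-layer network learning XOR in numpy (a toy of the thirty-year-old core idea, not the deep, GPU-scale systems described above, which stack many more layers of the same gradient machinery):

    # Tiny two-layer network trained by backpropagation on XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)               # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)    # gradient at the output
        d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated one layer
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())  # converges toward [0, 1, 1, 0]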
A well-known example of their performance is labeling an image, in English, saying that it is a baby with a stuffed toy. When a person looks at the image, that is what they also see. The algorithm has performed very well at labeling the image, much better than AI practitioners five years ago would have predicted for 2014. But the algorithm does not have the full competence that a person who could label that same image would have.
The learning algorithm knows there is a baby in the image but it doesn't know the structure of a baby, and it doesn't know where the baby is in the image. A current deep learning algorithm can only assign probabilities to each pixel that that particular pixel is part of a baby. Whereas a person can see that the baby occupies the middle quarter of the image, today's algorithm has only a probabilistic idea of its spatial extent. It cannot apply an exclusionary rule and say that non-zero probability pixels at extremes of the image cannot both be part of the baby. If we look inside the neuron layers it might be that one of the higher level learned features is an eye-like patch of image, and another feature is a foot-like patch of image, but the current algorithm would have no capability of relating the constraints of where and what spatial relationships could possibly be valid between eyes and feet in an image, and could be fooled by a grotesque collage of baby body parts, labeling it a baby. In contrast no person would do so, and furthermore would immediately know exactly what it was—a grotesque collage of baby body parts. Furthermore the current algorithm is completely useless at telling a robot where to go in space to pick up that baby, or where to hold a bottle and feed the baby, or where to reach to change its diaper. Today's algorithm has nothing like human level competence on understanding images.
Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people's heads.
The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.



Jonathan Gottschall. Distinguished Research Fellow, English Department, Washington & Jefferson College; Author, The Storytelling Animal

The ability to tell and comprehend stories is a main distinguishing feature of the human mind. It's therefore understandable that in pursuit of a more complete computational account of human intelligence, researchers are trying to teach computers how to tell and understand stories. But should we root for their success?
Creative writing manuals always stress that writing good stories means reading them first—lots of them. Aspiring writers are told to immerse themselves in great stories to gradually develop a deep, not necessarily conscious, sense for how they work. People learn to tell stories by learning the old ways and then—if they have some imagination—making those old ways seem new. It's not hard to envision computers mastering storytelling by a similar process of immersion, assimilation, and recombination—just much, much faster.
To date, practical experiments in computer-generated storytelling aren't that impressive. They are bumbling, boring, soulless. But the human capacity to make and enjoy art evolved from crude beginnings over eons, and the machines will evolve as well—just much, much faster.
Someday robots may take over the world. But those dystopian possibilities don't trouble me as much as the probable rise of art-making machines. Art is, arguably, what most distinguishes humans from the rest of creation. It is the thing that makes us proudest of ourselves. For all of the nastiness of human history, at least we wrote some really good plays and songs and carved some good sculptures. If human beings are no longer needed to make art, then what the hell would we be for?
But why should I be pessimistic? Why would a world with more great art be a worse place to live? Maybe it wouldn't be. But the thought still makes me glum. While I think of myself as a hard-bitten materialist, I must hold out some renegade hope for a dualism of body and spirit. I must hope that cleverly evolving algorithms and brute processing power are not enough—that imaginative art will always be mysterious and magical, or at least so weirdly complex that it can't be mechanically replicated.
Of course machines can out-calculate and out-crunch us. And soon they will all be acing their Turing tests. But who cares? Let them do our grunt work. Let them hang out and chat. But when machines can out-paint or out-compose us—when their stories are more gripping and poignant than ours—there will be no denying that we are, ourselves, just thought machines and art machines, and outdated and inferior models at that.



Arnold Trehub. Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain

Machines (humanly constructed artifacts) cannot think because no machine has a point of view; that is, a unique perspective on the worldly referents of its internal symbolic logic. We, as conscious cognitive observers, look at the output of so-called "thinking machines" and provide our own referents to the symbolic structures spouted by the machine. Of course, despite this limitation, such non-thinking machines have provided an extremely important adjunct to human thought.



Giulio Boccaletti. Physicist; Atmospheric and Oceanic Scientist, The Nature Conservancy

In 1922 the mathematician Lewis Fry Richardson imagined a large hall full of "computers": people who, one hand calculation at a time, would advance numerical weather prediction. Less than a hundred years later, machines have improved the productivity of that particular task by up to fifteen orders of magnitude, with the ability to process almost a million billion similar calculations per second.
Consider, by comparison, the growth in heavy-labor productivity. In 2014 the world used about 500 exajoules (an exajoule is a billion billion joules) of primary energy to produce electricity, fuel manufacturing, transport and heat. Even if we assumed all of that energy went into carrying out physical tasks in aid of the roughly 3 billion members of the global labor force (and it did not), an average adult diet of 2,000 Calories per capita per day would imply roughly 50 "energy laborers" for every worker. More stringent assumptions would still lead to, at most, an increase of a few orders of magnitude in the effective productivity of manual labor.
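The arithmetic behind that figure, using the numbers in the text (my back-of-the-envelope reconstruction):

\[ 2000\ \tfrac{\text{kcal}}{\text{day}} \times 4184\ \tfrac{\text{J}}{\text{kcal}} \times 365\ \tfrac{\text{days}}{\text{yr}} \;\approx\; 3.1\times10^{9}\ \tfrac{\text{J}}{\text{yr}}\ \text{per laborer-equivalent,} \]

\[ \frac{5\times10^{20}\ \text{J/yr}}{3\times10^{9}\ \text{workers} \;\times\; 3.1\times10^{9}\ \text{J/yr}} \;\approx\; 54\ \text{"energy laborers" each.} \]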
We have been wildly successful at accelerating our ability to think and process information, more so than any other human activity. The promise of artificial intelligence is to deliver another leap in increasing the productivity of specific cognitive functions: ones where the sophistication of the task is also orders of magnitude higher than previously possible.
Keynes would probably have argued that such an increase should ultimately lead to a fully employed society with greater free time and a higher quality of life for all. The skeptic might be forgiven for considering this a case of hope over experience. While there is no question that specific individuals will benefit enormously from delegating tasks to machines, the promise of greater idleness from automation has yet to be realized, as any modern employee—virtually handcuffed to a portable device—can attest.
So, if we are going to work more, deeper, and with greater effectiveness thanks to thinking machines, choosing wisely what they are going to be "thinking" about is particularly important. Indeed, it would be a shame to develop all this intelligence to then spend it on thinking really hard about things that do not matter. And, as ever in science, selecting problems worth solving is a harder task than figuring out how to solve them.
One area where the convergence of need, urgency, and opportunity is great is the monitoring and management of our planetary resources. Despite the dramatic increase in cognitive and labor productivity, we have not fundamentally changed our relationship to Earth: we are still stripping it of its resources to manufacture goods that turn to waste relatively quickly, with essentially zero end-of-life value to us. We run a linear economy on a finite planet, with seven billion people aspiring to become consumers. Our relationship to the planet is arguably more productive, but not much more intelligent, than it was a hundred years ago.
Understanding what the planet is doing in response, and managing our behavior accordingly, is a complicated problem, hindered by colossal amounts of imperfect information. From climate change, to water availability, to the management of ocean resources, to the interactions between ecosystems and working landscapes, our computational approaches are often inadequate to conduct the exploratory analyses required to understand what is happening, to process the exponentially growing amount of data about the world we inhabit, and to generate and test theories of how we might do things differently.
We have almost 7 billion thinking machines on this planet already, but for the most part they don't seem to be terribly concerned with how sustainable their life on this planet actually is. Very few of those people have the ability to see the whole picture in ways that make sense to them, and those that do are often limited in their ability to respond. Adding cognitive capacity to figure out how we fundamentally alter our relationship with the planet is a problem worth thinking about.



Psychologist; Historian of science; Publisher, Skeptic Magazine; Author, The Moral Arc

Proponents of Artificial Intelligence have a tendency to project a utopian future in which benevolent computers and robots serve humanity and enable us to achieve limitless prosperity, end poverty and hunger, conquer disease and death, achieve immortality, colonize the galaxy, and eventually even conquer the universe by reaching the Omega point where we become god—omniscient and omnipotent. AI skeptics envision a dystopian future in which malevolent computers and robots take us over completely, making us their slaves or servants, or driving us into extinction, thereby terminating or even reversing centuries of scientific and technological progress.
Most such prophecies are grounded in a false analogy between human nature and computer nature, or natural intelligence and artificial intelligence. We are thinking machines, the product of natural selection that also designed into us emotions to shortcut the thinking process. We don't need to compute the caloric value of foods; we just feel hungry and eat. We don't need to calculate the waist-to-hip or shoulder-to-waist ratios of potential mates; we just feel attracted to someone and mate with them. We don't need to work out the genetic cost of raising someone else's offspring if our mate is unfaithful; we just feel jealous. We don't need to estimate the damage of an unfair exchange; we just feel injustice and desire revenge. All of these emotions were built into our nature by evolution, and none of them have been designed into our computers. So the fear that computers will become evil is unfounded, because it will never occur to them to take such actions against us.
As well, both utopian and dystopian visions of AI are based on a projection of the future quite unlike anything history has given us. Instead of utopia or dystopia, think protopia, a term coined by the futurist Kevin Kelly, who described it in an Edge conversation this way: "I call myself a protopian, not a utopian. I believe in progress in an incremental way where every year it's better than the year before but not by very much—just a micro amount." Almost all progress in science and technology, including computers and artificial intelligence, is of a protopian nature. Rarely, if ever, do technologies lead to either utopian or dystopian societies.
Consider the automobile. My first car was a 1966 Ford Mustang. It had power steering, power brakes, and air conditioning, all of which were relatively cutting-edge technology at the time. Every car I've had since then—parallel to the evolution of automobiles in general—has been progressively smarter and safer; not in leaps and bounds, but incrementally. Think of the 1950s' imagined jump from the jalopy to the flying car. That never happened. Instead what we got were decades-long cumulative improvements that led to today's smart cars with their onboard computers and navigation systems, air bags and composite metal frames and bodies, satellite radios and hands-free phones, and electric and hybrid engines. I just swapped out a 2010 Ford Flex for a 2014 version of the same model. Externally they are almost indistinguishable; internally there are dozens of tiny improvements in every system, from the engine and drive train, to navigation and mapping, to climate control and radio and computer interface.
Such incremental protopian progress is what we see in most technologies, including and especially artificial intelligence, which will continue to serve us in the manner we desire and need. Instead of Great Leap Forward or Giant Phase Backward, think Small Step Upward.




Open Source and Public Sector, Google

Those of you participating in this particular Edge Question don't need to be reintroduced to the Ghemawat-Dean Conversational artificial intelligence test (GDC). Past participants in the test have failed as obviously as they have hilariously. However, the 2UR-NG entry really surprised us all with its amazing, if child-like, approach to conversation and its ability to express desire and curiosity, and to retain and chain facts.
Its success has caused many of my compatriots to write essays like "The coming biological future will doom us all" and to make jokes about "welcoming their new biological overlords." You should know that I don't subscribe to this kind of doom-and-gloom scare-writing. Before I tell you why we should not worry about the extent of biological intelligence, I thought I'd remind people of the very real limits of biological intelligence.
First off, speed of thought: these biological processes are slow and use an incredible amount of resources. I cannot emphasize enough how incredibly difficult it is to produce these intelligences. One has to waste so much biological material, and I know from experience that it takes forever to assemble the precursors in the genesis machine. Following this arduous process, your specimen has to gestate. Gestate! I mean, it's not like these ... animals ... come about the way we do, through clean, smart crystallography or in the nitrogen lakes of my youth; they have to be kept warm for months and months and then decanted (a very messy process, I assure you), and then you as often as not have an inviable specimen.
It is kind of gross, really. But let's suppose you get to birth these specimens; then you have to feed them and, again, keep them warm. A scientist can't even work within their environmental spaces without a cold jacket circulating helium throughout the terminal. Then you have to 'feed' them. They don't use power like we do, but instead ingest other living matter. It's disgusting to observe, and I've lost a number of grad students with weaker constitutions.
Assume you've gotten far enough to try the GDC. You've kept them alive through any of a variety of errors in their immune system. They've not choked on their sustenance, they haven't drowned in their solvent, and they've managed to keep their wet parts off things that would freeze, bond, or electrocute them. If those organisms continue to develop, will they then rise up and take over? I don't think so. They have to deal with so many problems related to their design. I mean, their processors are really just chemical soups that have to be kept in constant balance. Dopamine at this level or they shut down voluntarily. Vasopressin at this level or they start retaining water. Adrenaline at this level for this long and poof, their power delivery network stops working.
Moreover, don't get me started on the power delivery method! It's more like the fluorinert liquid cooling systems of our ancestors than a modern heat-tolerant wafer. I mean, they have meat that filters their coolant/power delivery system, and it is constantly failing. Meat! You introduce the smallest amount of machine oil or cleaning solvent into the system and they stop operating fast. One side effect of certain ethanol mixtures is that the specimens expel their nutrition, but they seem to like it in smaller amounts. It is baffling in its ambiguity.
And their motivations! Creating new organisms seems paramount, more important than data ingress/egress, computation or learning. It's baffling. I can't imagine that they would see us machine-folk as anything but tools to advance their reproduction. We could end the experiment simply by matching them poorly with each other or only allowing access to each other with protective cladding. In my opinion, there is nothing to fear from these animals. In the event they grow beyond the confines of their cages, maybe we can then ask ourselves the more important question: If humans show real machine-like intelligence, do they deserve to be treated like machines? I would think so, and I think we could be proud to be the parent processes of a new age.



Gerontologist; Chief Science Officer. SENS Foundation; Author, Ending Aging

If asked to rank humanity's problems by severity, I would give the silver medal to the need to spend so much time doing things that give us no fulfillment—work, in a word. I consider that the ultimate goal of artificial intelligence is to hand off this burden, to robots that have enough common sense to perform those tasks with minimal supervision.
But some AI researchers have altogether loftier aspirations for future machines: they foresee computer functionality that vastly exceeds our own in every sphere of cognition. Such machines would not only do things that people prefer not to; they would also discover how to do things that no one can yet do. This process can, in principle, iterate—the more such machines can do, the more they can discover.
What's not to like about that? Why do I not view it as a research goal superior to machines with common sense (which I'll call "minions")?
First, there is the well-publicised concern that such machines might run amok—especially if the growth of a machine's skill set (its "self-improvement") were not iterative but recursive. What researchers mean by this is that enhancements might be not only to the database of things a machine can do, but to its algorithms for deciding what to do. It has been suggested, firstly, that this recursive self-improvement might be exponential (or faster), creating functionality that we cannot remotely comprehend before we can stop the process. So far so majestic—if it weren't that the trajectory of improvement would itself be out of our control, such that these superintelligent machines might gravitate to "goals" (metrics by which they decide what to do) that we dislike. Much work has been done on ways to avoid this "goal creep", and to create a reliably, permanently "friendly," recursively self-improving system, but with precious little progress.
My reason for believing that recursive self-improvement is not the right ultimate goal for AI research is actually not the risk of unfriendly AI, though: rather, it is that I quite strongly suspect that recursive self-improvement is mathematically impossible. In analogy with the so-called "halting problem" concerning determining whether any program terminates, I suspect that there is a yet-to-be-discovered measure of complexity by which no program can ever write another program (including a version of itself) that is an improvement.
The program written may be constrained to be, in a precisely quantifiable sense, simpler than the program that does the writing. It's true that programs can draw on the outside world for information on how to improve themselves—but I claim (a) that that really only delivers far-less-scary iterative self-improvement rather than recursive, and (b) that anyway it will be inherently self-limiting, since once these machines become as smart as humanity they won't have any new information to learn. This argument isn't anywhere near iron-clad enough to give true reassurance, I know, and I bemoan the fact that (to my knowledge) no one is really working to seek such a measure of complexity or to prove that none can exist—but it's a start.
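For readers unfamiliar with the halting problem the author invokes, here is the classic diagonal argument as a Python sketch. It is purely illustrative: the `halts` oracle is hypothetical, and the point of the construction is that no such total oracle can exist.

```python
# The classic diagonal argument: suppose a perfect halts(f, x) existed.
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) would halt."""
    raise NotImplementedError("no such total oracle can exist")

def paradox(program):
    # Ask the oracle about running `program` on itself...
    if halts(program, program):
        while True:      # ...and do the opposite: loop forever if it "halts",
            pass
    return "halted"      # ...or halt if it "loops". Either answer is wrong.

# paradox(paradox) defeats any candidate oracle. This self-referential
# style of limit is the kind the author conjectures might also block a
# program from writing a strictly "better" version of itself.
```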
But in contrast, I absolutely am worried about the other reason why I stick to the creation of minions as AI's natural goal. It is that any creative machine—whether technological, artistic, whatever—undermines the distinction between man and machine. Humanity already has massive uncertainty regarding what rights various non-human species have. Since objective moral judgements build on agreed norms, which themselves arise from inspection of what we would want for ourselves, it seems impossible even in principle to form such judgements concerning entities that differ far more from us than animals do from each other, so I say we should not put ourselves in the position of needing to try. For illustration, consider the right to reproduce despite resource limitations. Economic incentive-based compromise solutions seem to work adequately. But how can we identify such compromises for "species" with virtually unlimited reproductive potential?
I contend that the possession of common sense does not engender these problems. I define common sense, for present purposes, as the ability to process highly incomplete information so as to identify a reasonably close-to-optimal method for achieving a specified goal, chosen from a parametrically pre-specified set of alternative methods. This explicitly excludes the option of "thinking"—of seeking new methods, outside the pre-specified set, that might outperform anything within the set.
Thus, again for illustration, if the goal is one that should ideally be achieved quickly, and can be achieved faster by many machines than by one, the machine will not explore the option of first building a copy of itself unless that option is pre-specified as admissible, however well it may "know" that doing so would be a good idea. Since admissibility is specified by inclusion rather than exclusion, the risk of "method creep" can (I claim) be safely eliminated. Vitally, it is possible to prevent recursive self-improvement entirely (if it turns out to be possible after all!).
The availability of an open-ended vista of admissible ways to achieve one's goals constitutes a good operational definition of "awareness" of those goals. Awareness implies the ability to reflect on the goal and on one's options for achieving it, which amounts to considering whether there are options one hadn't thought of.
I could end with a simple "So let's not create aware machines"—but any possible technology that anyone thinks is desirable will eventually be developed, so it's not that simple. What I say instead is, let's think hard now about the rights of thinking machines, so that well before recursive self-improvement arrives we can test our conclusions in the real world with machines that are only slightly aware of their goals. If, as I predict, we thereby discover that our best effort at such ethics fails utterly even at that early stage, maybe such work will cease.



Managing Director Excel Venture Management; Author, As the Future Catches You

In the pantheon of gruesome medical experiments few match head transplants. Animal experiments have attempted this procedure in two ways: substitute one head for another or graft a second head onto an animal. So far the procedure has not been very successful. But we are getting far better at vascular surgery, bypassing, stitching, and grafting both big and microscopic vessels. Similar advances are taking place in rebuilding muscles and damaged vertebrae. Even the reattachment of severed spinal cords, in mice and primates, seems to be advancing steadily.
Partial brain transplants are likely a long way out. Other than some stem cell procedures, attaching parts of one brain to another is highly complex, given the consistency of most brain mass and the trillions of connections. But as extreme operations—reattachments of fingers, limbs, even faces—become commonplace, the question of whether we could, and should, transplant an entire human head looms closer.
Partly reattaching a human head is already a reality. In 2002 a drunk driver hit teenager Marcos Parra so hard Parra's head was almost entirely detached; only the spinal cord, and a few blood vessels, kept the entire cranium from coming off. Fortunately a surgeon, Curtis Dickman, had been preparing for just this type of emergency. Screws reattached vertebrae to the base of the skull, part of the pelvic bone was redeployed to bring neck and head back together, and within six months Parra was playing basketball.
Successful animal whole head transplants may not be that far out. And if such procedures were successful, and the animal regained consciousness, one could begin to answer pretty fundamental questions including: do the donor's memories and consciousness also transplant?
Similar questions were asked during the first heart transplants, but it turns out the emotions, attachments, and loves of the donor did not transplant with the organ that was always "tied" to emotions. The heart is but a muscle. How about the brain?
If mice with new heads recognized previously navigated mazes, or maintained the previous mouse's conditioned reactions to certain foods, smells, or stimuli, we would have to consider the possibility that memory and consciousness do transplant. But if experiment after experiment demonstrated no previous knowledge or emotions, then we would have to consider that the brain too might just be an electrochemical muscle.
Actually knowing if you can transplant knowledge and emotions from one body to another goes a long way towards answering the question "could we ever download and store part of our brains, not just into another body but eventually into a chip, into a machine?" If you could, then it would make the path to large-scale AI far easier. We would simply have to copy, merge, and augment existing data, data that we would know is transferable, stackable, manipulatable. The remaining question would be: what is the most efficient interface between the biology and the machine?
But if it turned out that all data erases upon transplant, that knowledge is unique to the individual organism (in other words, that there is something innate and individual to consciousness-knowledge-intelligence), then simply copying the dazzlingly complex connectome of brains into machines would likely not lead to an operative intelligence.
If brain data is not transferable, or replicable, then developing AI would require building a parallel machine thought system, something quite separate and distinct from animal and human intelligence. Building consciousness from scratch implies following a new and very different evolutionary path to that of human intelligence. Likely this new system would eventually operate under very different rules and constraints. In which case it would likely be far better at certain tasks and be unable to emulate some forms of our intelligence. Were AI to emerge from this kind of evolutionary system it would likely represent a new, distinct consciousness, one on a parallel evolutionary track. In this scenario how machines might think, feel, govern could have little to do with the billions of years of animal-human intelligence and learning. Nor would they be constrained to organize their society, and its rules, as do we.



Former Banker; Author, Extreme Money; Traders Guns & Money

In his novel Gravity's Rainbow, Thomas Pynchon identifies the confusion about the subject and object of enquiries: "if they can get you asking the wrong questions, they don't have to worry about answers." Thinking about machines that think poses more questions about human beings than about the machines or Artificial Intelligence (AI).
Technology enables machines, providing access to essential resources, power, speed and communications that make life and improved living standards possible. Machines execute tasks, specified and programmed by humans. Techno-optimists believe that progress is near a singularity, the hypothetical moment when machines will reach the point of greater-than-human intelligence.
It is a system of belief and faith. Just like the totems and magic used by our ancestors, or organised religion, science and technology deal with uncertainty and fear of the unknown. They allow limited control over our immediate environment. They increase material prosperity and comfort. They are a striving for perfectibility. Technology asserts human superiority in the pantheon of creation.
But science is a long way from unlocking the secrets in nature's infinite book. Knowledge of the origins of the universe, life and the fundamentals of matter remains limited. Biologist E.O. Wilson noted that if natural history were a library of books, we have not even finished the first chapter of the first book. Human knowledge is always incomplete, sometimes inaccurate, and frequently the cause of, not the solution to, problems.
First, use of science and technology is often ineffective, with unintended consequences.
In Australia, introduced rabbits spread rapidly, becoming a pest that changed Australia's ecosystems and destroyed endemic species. In the 1950s, scientists introduced the Myxoma virus, severely reducing the rabbit population. When genetic resistance allowed the population to recover, Calicivirus, which causes rabbit haemorrhagic disease, was introduced as a new control measure. Increasing immunity rapidly reduced its effectiveness. In 1935, the Cane Toad was introduced to control insect pests of sugar cane. Unsuccessful in controlling the insects, the amphibian became an invasive species devastating indigenous wildlife.
Life-saving antibiotics have increased drug-resistant infections. A 2014 British study found that drug resistance may cause 10 million deaths a year worldwide by 2050, at a potential cost of US$100 trillion, reducing GDP by 3.5%.
Economic models have repeatedly failed because of incorrect assumptions, flawed causal relationships, inputs that are more noise than data, and unanticipated human factors. Forecasts have proved inaccurate. Models consistently underestimate risks and exposures, resulting in costly financial crises.
Second, consequences of technology, especially over longer terms, are frequently not understood at inception.
The ability to harness fossil fuels to provide energy was the foundation of the industrial revolution. The long-term impact of CO2 emissions on the environment now threatens the survival of the species. Theoretical physics and mathematics made possible nuclear and thermo-nuclear devices, capable of extinguishing all life on the planet.
Third, technology creates moral, ethical, political, economic and social concerns which are frequently ignored.
Nuclear, biological and chemical weapons of mass destruction or remotely controlled drones rely on technical advances. The question of whether such technology should be developed or used at all remains. Easy access to the requisite knowledge, problems of proliferation and difficulty of controlling dual use (civilian and defense) technology complicates the matter.
Robots and AI may improve productivity. While a few creators might capture large rewards, the effect on economic activity is limited. Given that consumption constitutes over 60% of activity in developed economies, decreasing general employment and lower income levels harm the wider economy. In 1955, showing off a new automatically operated plant, a company executive asked UAW head Walter Reuther: "How are you going to collect union dues from those guys [the robots]?" Reuther countered: "And how are you going to get them to buy Fords?"
When it comes to questions of technology, the human race is rarely logical. We frequently do not accept that something cannot or should not be done. Progress is accepted without question or understanding of what and why we need to know. We do not know when and how our creations should be used, or their limits. We frequently do not know the real or full consequences. Doubters are dismissed as Luddites.
Technology and its manifestations, such as machines or AI, is an illusion which appeals to human arrogance, ambition and vanity. It multiplies confusion in poet T.S. Eliot's "wilderness of mirrors."
The human species is simply too small, insignificant and inadequate to fully succeed in anything that we think we can do. Thinking about machines that think merely confirms that inconvenient truth.



Deputy Technology Editor, The New York Times; Lecturer, U.C. Berkeley's iSchool

Creatures once inhabited fantastic unknown lands on medieval maps. Those animals were useful fictions of rumor and innuendo—men whose heads were in their bodies, or whose humanity was mixed with the dog or the lion—closing the gap between man and animal. They were the hopes and fears of what might live within the unknown. Today, we imagine machines with consciousness.
Besides self-awareness, the imaginary beasts of A.I. possess calculation and prediction, independent thought, and knowledge of their creators. Pessimists fear these machines could regard us and pass lethal verdicts. Optimists hope the thinking machines are benevolent, an illuminating aid and a comfort to people.
Neither idea of an encounter with an independent man-made intelligence has much evidence of becoming real. That doesn't mean they aren't interesting. The old mariners' maps were drawn in a time of primitive sailing technology. We are starting to explore a world thoroughly enchanted by computation. The creatures of A.I. Island fuse the human and the machine, but to the same end as the fusing of man and animal. If they could sing, they would sing songs of us.
What do we mean when we talk about the kind of "intelligence" that might look at mankind and want it dead, or illuminate us as never before? It is clearly more than a machine that wins at chess. We have one of those, with no discernible change in the world, other than a new reason to celebrate the very human intelligence of Deep Blue's creators.
The beings of A.I. Island do something far more interesting than outplaying Kasparov. They feel like playing chess. They know the exhilaration of mental stimulation, and the torture of its counterpart, boredom.
This means making software that encodes an awareness of having only one finite life, which somehow matters greatly to some elusive self. It is driven nearly mad by the absence of some kind of stimulation—playing chess, perhaps. Or killing mankind.
Like us, the fabulous creatures of A.I. Island want to explain themselves, and judge others. They have our slight distance from the rest of reality that we believe other animals don't feel. An intelligence that is like ours knows it is sentient, feels something is amiss, and is continually trying to do something about that.
With these kinds of software challenges, and given the very real technology-driven threats to our species already at hand, why worry about malevolent A.I.? For decades to come, at least, we are clearly more threatened by the likes of trans-species plagues, extreme resource depletion, global warming, and nuclear warfare.
Which is why malevolent A.I. rises in our Promethean fears. It is a proxy for us, at our rational peak, confidently killing ourselves.
The dreams of benevolent A.I. are equally self-reflective. These machine companions have super intellects turned towards their creators. Given the autonomy implicit in a high level of A.I., we must see these new beings as interested in us. Come to think of it, malevolent A.I. is interested in us too, just in the wrong way.
Both versions of the strange beast reflect a deeper truth, which is the effect that the new exploration of a computer-enchanted world has on us. By augmenting ourselves with computers, we are becoming new beings—if you will, monsters to our former selves.
We have changed our consciousness many times over the past 50,000 years, taking on ideas of an afterlife, or monotheism, or becoming a print culture, or a species well aware of its tiny place in the cosmos. But we have never changed so swiftly, or with such knowledge that we are undertaking the change.
Consider some effects just in the past decade. We have killed many of our historic barriers of time and space with instantaneous communications. Language no longer divides us, because of increasingly better computer translation and image sharing. Open source technology and Internet search give us a little-understood power of working in collective ways.
Alongside the positives are the disappearance of privacy, and the tracking of humans to better control their movements and desires. We are willfully submitting to unprecedented social connection—a seeming triviality that may extinguish all ideas of solitude and selfhood. Ideas of economics are changing under the guise of robotics and the sharing economy.
We are building new intelligent beings, but we are building them within ourselves. It is only artificial now, because it is new. As it becomes dominant, it will simply become intelligence.
The machines of A.I. Island are also what we fear we may ourselves become, within a few generations. And we hope those machine-driven people feel a kinship with us, even down to our loneliness and distance from the world, which is also our wellspring of human creativity.
We have met the A.I., and it is us. In a timeless human tension, we yearn for transcendence, but we don't want to change too much.



Author, The Math Book, The Physics Book, and The Medical Book Trilogy
If we believe that thinking and consciousness are the result of patterns of brain cells and their components, then our thoughts, emotions, and memories could be replicated in moving assemblies of bicycle parts. Of course, the bicycle brains would have to be very big to represent the complexity of our minds. In principle, our minds could be hypostatized in the patterns of slender tree limbs moving in the wind or in the movements of termites.
What would it mean for a "bicycle brain," or any machine, to think and know something? There are many kinds of knowledge the machine-being could have. This makes discussions of thinking things a challenge. For example, knowledge may be factual or propositional: A being may know that the First Franco-Dahomean War was a conflict between France and the African Kingdom of Dahomey under King Béhanzin.
Another category of knowledge is procedural: knowing how to accomplish a task such as playing the game of Go, cooking a soufflé, making love, performing a rotary throw in Aikido, shooting a 15th-century Wallarmbrust crossbow, or simulating the Miller–Urey experiment to explore the origins of life. However, for us at least, reading about accurately shooting a Wallarmbrust crossbow is not the same as actually being able to accurately shoot the crossbow. This second type of procedural knowing implies actually being able to perform the act. Yet another kind of knowledge deals with direct experience. This is the kind of knowledge referred to when someone says, "I know love" or "I know fear."
Also, consider that human-like interaction is quite important for any machine that we would wish to say has human-like intelligence and thinking. A smart machine is less interesting if its intelligence lies trapped in an unresponsive program, sequestered in a kind of isolated limbo. As we provide our computers with increasingly advanced sensory peripherals and larger databases, it is likely we will gradually come to think of these entities as intelligent. Certainly within this century, some computers will respond in such a way that anyone interacting with them will consider them conscious and deeply thoughtful.
The entities will exhibit emotions. But more importantly, over time, we will merge with these creatures. We will become one. We will share our thoughts and memories with these devices. Our organs may fail and turn to dust, but our Elysian essences will survive. Computers, or computer-human hybrids, will surpass humans in every area, from art to mathematics to music to sheer intellect.  
In the future, when our minds merge with artificial agents and also integrate various electronic prostheses, for each of our own real lives we will create multiple simulated lives. Your day job is as a computer programmer for a big company. However, after work, you'll be a knight in shining armor in the Middle Ages, attending lavish banquets and smiling at wandering minstrels. The next night, you'll be in the Renaissance, living in your home on the southern coast of the Sorrentine Peninsula, enjoying a dinner of plover and pigeon. Perhaps, when we become hybrid entities with our machines, we will simulate new realities to rerun historical events with slight changes to observe the results, produce great artworks akin to ballets or plays, solve the problem of the Riemann Hypothesis or of baryon asymmetry, predict the future, and escape the present, so as to call all of space-time our home.
Of course, the ways in which a machine thinks could be quite different from the ways in which we think. After all, it is well known that machines don't see the same way we do, and image-recognition algorithms called deep neural networks sometimes declare, with near 100% certainty, that images of random static are depictions of various animals. If such neural networks can be fooled by static, what else will fool thinking machines of the future?
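For the curious, the kind of result the author describes can be reproduced, at least in spirit, with a short gradient-ascent sketch: start from random static and push the pixels toward whatever maximizes one class score. This is only a minimal illustration, assuming a stock pretrained classifier; the target class index and hyperparameters are arbitrary, not those of the published experiments:

```python
# Gradient ascent from random static toward one class's score.
# Assumes torchvision's pretrained ResNet-18; class index is arbitrary.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # random static
target_class = 285  # an arbitrary ImageNet class index

optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    loss = -model(img)[0, target_class]  # maximize the target class score
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep pixels in a valid range

with torch.no_grad():
    confidence = torch.softmax(model(img), dim=1)[0, target_class]
print(f"reported confidence on static: {confidence.item():.3f}")  # typically high
```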



Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust
A penny for your thoughts? You may not choose to answer. But the point is that, as a conscious agent, you surely can. That's what it means to have introspective access. You know—and can tell us—what's on stage in the theatre of your mind. Then how about machines? A bitcoin for the thinking machine's thoughts? But no one has yet designed a machine to have this kind of access. Wittgenstein remarked that, if a lion could speak, we would not understand him. If a machine could speak, it would not have anything to say. "What do you think about machines that think?" Simple. I don't think that—as yet—there are any such machines.
Of course this may soon change. Far back in human history, natural selection discovered that, given the particular problems humans faced, there were practical advantages to having a brain capable of introspection. Likewise machine programmers may well discover that, when and if machines face similar problems, the software trick that works for humans will work for them as well. But what are these problems, and why is the theatre of consciousness the answer?
The theatre lets you in on a secret. It lets you see how your own mind works. Observing, for example, how beliefs and desires generate wishes that lead to actions, you begin to gain insight into why you think and act the way you do. So you can explain yourself to yourself, and explain yourself to other people too. But, equally important, it means you have a model for explaining other people to yourself. Introspective consciousness has laid the ground for what psychologists call "Theory of Mind."
With humans, for whom social intelligence is the key to biological survival, the advantages have been huge. With machines, for whom success in social life has not yet become an issue, there has been little if any reason to go that way. However there's no question the time is coming when machines will indeed need to understand other machines' psychology, so as to be able to work alongside them. What's more, if they are to collaborate effectively with humans, they will need to understand human psychology too. I guess that's when their designers—or maybe the machines themselves—will follow Nature's lead and install a machine version of the inner eye.
Is there a danger that, once this stage is reached, these newly insightful machines will come to understand humans only too well? Psychopaths are sometimes credited with having not too little but too great an understanding of human psychology. Is this something we should fear with machines?
I don't think so. This situation is actually not a new one. For thousands of years humans have been selecting and programming a particular species of biological machine to act as servants, companions and helpmeets to ourselves. I'm talking of the domestic dog. The remarkable result has been that modern dogs have in fact acquired an exceptional ability to mind-read—both the minds of other dogs and of humans—superior to that of any animal other than humans themselves. This has evidently evolved as a mutually beneficial relationship, not a competition, even if it's one in which we have retained the upper hand. If and when it gets to the point where machines are as good at reading human minds as dogs now are, we shall of course have to watch out in case they get too dominant and manipulative, perhaps even too playful—just as we already have to do with man's best friend. But I see no reason to doubt we'll remain in control.
There is a painting by Goya of a terrible Colossus who strides across the landscape, while the human population flees in terror. Colossus was also the name of one of the first electronic computing machines, built at Bletchley Park, where Turing worked. Do we have to imagine an existential threat to humanity coming from that computer's descendants? No, I look on the bright side. With luck, or rather by arrangement, the Colossus will remain a Big Friendly Giant.


Professor of Security Engineering at Cambridge University

The coming shock isn't from machines that think, but machines that use AI to augment our perception.
For millions of years, other people saw us using the same machinery we used to see them. We have pretty much the same eyes as our rivals, and pretty much the same mirror neurons. Within any given culture we have pretty much the same signaling mechanisms and value systems. So when we try to deceive, or to detect deception in others, we're on a level playing field. I can wear a big penis gourd to look more manly, and you can paint your chest with white and ochre mud stripes to look more scary. Civilisation made the games more sophisticated; I signal class by wearing a tailored jacket with four cuff buttons, while you signal wealth by wearing a big watch. But our games would have been perfectly comprehensible to our Neolithic ancestors.
What's changing as computers become embedded invisibly everywhere is that we all now leave a digital trail that can be analysed by AI systems. The Cambridge psychologist Michael Kosinski has shown that a person's race, intelligence and sexual orientation can be deduced fairly quickly from their behaviour on social networks: on average it takes only four Facebook "likes" to tell whether you're straight or gay. So whereas in the past gay men could choose whether or not to wear their Out and Proud t-shirt, you just have no idea what you're wearing any more. And as AI gets better, you're mostly wearing your true colours.
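The mechanics behind such inferences are unglamorous: a toy version is just a linear classifier over a sparse user-by-page "likes" matrix. The sketch below uses synthetic data and is only meant to show the shape of the method, not the published results:

```python
# Toy trait inference from a sparse user-by-page "likes" matrix.
# Data is synthetic; real studies used millions of Facebook profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 300

trait = rng.integers(0, 2, n_users)        # hidden binary trait
base = rng.random(n_pages) * 0.05          # background like-rates per page
signal = rng.random(n_pages) * 0.10        # extra rate on trait-linked pages
p_like = base + np.outer(trait, signal)    # the trait shifts some like-rates
likes = rng.random((n_users, n_pages)) < p_like

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```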
It's as if we all evolved in a forest where all the animals could only see in black and white, and now a new predator comes along who can see in colour. All of a sudden, half of your camouflage doesn't work, and you don't know which half!
At present this is great if you're an advertiser, as you can figure out how to waste less money. It isn't yet available on the street. But the police are working on it; which cop wouldn't want a Google Glass app that will highlight passersby with a history of violence, coupled perhaps with w-band radar to see which of them is carrying a weapon?
The next question is whether only the authorities have enhanced cognition systems, or whether they're available to all. In twenty years' time, will we all be wearing augmented reality goggles? What will the power relationships be? If a policeman can see my arrest record when he looks at me, can I see whether he's been the subject of brutality complaints? If a politician can see whether I'm a party supporter or an independent, can I see his voting record on the three issues I care about? Never mind the right to bear arms; what about the right to wear Google Glass?
Perception and cognition will no longer be conducted inside a single person's head. Just as we now use Google and the Internet as memory prostheses, we'll be using AI systems that draw on millions of machines and sensors as perceptual prostheses.
But can we trust them? Deception will no longer just be something that individual humans do to each other. Governments will influence our perceptions via the tools we use for cognitive enhancement, just as China currently censors search results; while in the West, advertisers will buy and sell what we get to see. How else will the system be paid for?



Technology Forecaster; Consulting Associate Professor, Stanford University

The prospect of a world inhabited by robust AIs terrifies me. The prospect of a world without robust AIs also terrifies me. Decades of technological innovation have created a world system so complex and fast-moving that it is quickly becoming beyond human capacity to comprehend, much less manage. If we are to avoid civilizational catastrophe, we need more than clever new tools—we need allies and agents.
So-called "narrow" AI systems have been around for decades. At once ubiquitous and invisible, narrow AIs make art, run industrial systems, fly commercial jets, control rush hour traffic, tell us what to watch and buy, determine if we get a job interview, and play matchmaker for the lovelorn. Add in the relentless advance of processing, sensor and algorithmic technologies, and it is clear that today's narrow AIs are tracing a trajectory towards a world of robust AI. Long before artificial super-intelligences arrive, evolving AIs will be pressed into performing once-unthinkable tasks from firing weapons to formulating policy.
Meanwhile, today's primitive AIs tell us much about future human-machine interaction. Narrow AIs may lack the intelligence of a grasshopper, but that hasn't stopped us from holding heartfelt conversations with them and asking how they feel. It is in our nature to infer sentience at the slightest hint that life might be present. Just as our ancestors once populated their world with elves, trolls and angels, we eagerly seek companions in cyberspace. This is one more impetus driving the creation of robust AIs—we want someone to talk to. The consequence could well be that the first non-human intelligence we encounter won't be little green men or wise dolphins, but creatures of our own invention.
We of course will attribute feelings and rights to AIs—and eventually they will demand it. In Descartes' time, animals were considered mere machines—a crying dog was considered no different than a gear whining for want of oil. Late last year, an Argentine court granted rights to an orangutan as a "non-human person." Long before robust AIs arrive, people will extend the same empathy to digital beings and give them legal standing.
The rapid advance of AIs is also changing our understanding of what constitutes intelligence. Our interactions with narrow AIs will cause us to realize that intelligence is a continuum and not a threshold. Earlier this decade Japanese researchers demonstrated that slime mold could thread a maze to reach a tasty bit of food. Last year a scientist in Illinois demonstrated that under just the right conditions, a drop of oil could negotiate a maze in an astonishingly lifelike way to reach a bit of acidic gel. As AIs insinuate themselves ever deeper into our lives, we will recognize that modest digital entities, as well as most of the natural world, carry the spark of sentience. From there it is just a small step to speculate about what trees or rocks—or AIs—think.
In the end, the biggest question is not whether AI super-intelligences will eventually appear. Rather the question is what will be the place of humans in a world occupied by an exponentially growing population of autonomous machines. Bots on the Web already outnumber human users—the same will soon be true in the physical world as well.
Lord Dunsany once cautioned, "If we change too much, we may no longer fit into the scheme of things."



Professor of Genomics, The Scripps Translational Science Institute; Author, The Patient Will See You Now

Back in 1932, Walter Cannon published a landmark work on human physiology—The Wisdom of the Body. He described the tight regulation of many of our body's parameters, such as hydration, blood glucose, sodium, and temperature. This concept of homeostasis, or auto-regulation, is an extraordinary means by which we stay healthy. Indeed, it represents a machine-like quality: that our body can so finely tune such important functions.
Although it has taken the better part of a century, we are now ready for the next version—Cannon 2.0. While some have expressed marked trepidation about the rise of artificial intelligence, this capability will have an extraordinary impact on preserving our health. We are quickly moving to "all-cyborg" status, surgically connected to our smartphones. While they have been called prosthetic brains, "smart" phones today are just a nascent precursor to where we are headed. Very soon the wearable sensors, whether they are Band-Aids, watches, or necklaces, will be accurately measuring our essential medical metrics—not just one-off assessments, but continuous, real-time streaming, obtaining data that we never had before.
Beyond our body's vital signs (blood pressure, heart rhythm, oxygen concentration in the blood, temperature, breathing rate), there will be quantitation of mood and stress via tone and inflection of voice, galvanic skin response and heart rate variability, facial expression recognition, and tracking of our movement and communication. Throw the analytes from our breath, sweat, tears, and excrement into the mix. Yet another layer of information captured will include our environmental exposures, ranging from air quality to pesticides in foods.
None of us—or our bodies—are smart enough to be able to integrate and process all of this information about ourselves. That's the job for deep learning, with algorithms that provide feedback loops to us via our mobile devices. What we're talking about does not exist today. It hasn't yet been developed, but it will. And it will be providing what heretofore was unobtainable, multi-scale information about ourselves and—for the first time—the real ability to pre-empt disease.
Almost any medical condition with an acute episode—like an asthma attack, seizure, autoimmune attack, stroke, or heart attack—will be potentially predictable in the future with artificial intelligence and the Internet of all medical things. There's already a wristband that can predict when a seizure is imminent, and that can be seen as a rudimentary first step. In the not-so-distant future, you'll be getting a text message or voice notification that tells you precisely what you need to prevent a serious medical problem. When that time comes, those who fear AI may suddenly embrace it. When we can put together big data for an individual with the requisite contextual computing and analytics, we've got a recipe for machine-mediated medical wisdom.
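A crude illustration of such a feedback loop, assuming nothing more than a streamed vital sign and a rolling baseline (the window and threshold here are illustrative placeholders, not clinical values):

```python
# Rolling z-score alert over a streamed vital sign.
from collections import deque
import statistics

def monitor(stream, window=60, threshold=3.0):
    """Yield (reading, alert) pairs; alert when a reading leaves the baseline."""
    recent = deque(maxlen=window)
    for reading in stream:
        alert = False
        if len(recent) >= 10:  # wait for a minimal baseline first
            mean = statistics.fmean(recent)
            sd = statistics.pstdev(recent) or 1e-9  # avoid division by zero
            alert = abs(reading - mean) / sd > threshold
        recent.append(reading)
        yield reading, alert

# Example: a steady heart rate with one abrupt spike at the end.
readings = [72 + i % 3 for i in range(120)] + [140]
for reading, alert in monitor(readings):
    if alert:
        print(f"alert: reading {reading} deviates from the recent baseline")
```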


Founder and CEO of Projection Point; author, Risk Intelligence

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person's Kool-Aid.
This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.
Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.
This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.
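The arithmetic of the mugging is easy to state: under naive expected utility, for any probability p > 0 there is some promised payoff U with p × U greater than the wallet's value, as this toy calculation (with purely illustrative numbers) shows:

```python
# For any p > 0, some promised payoff U makes the "deal" rational
# under naive expected utility. Numbers are purely illustrative.
wallet = 100.0          # utility of keeping the wallet
p = 1e-12               # probability the mugger honors the deal

required_payoff = wallet / p
print(f"promise over {required_payoff:.0e} utils and naive expected "
      f"utility (p * payoff) exceeds the wallet's value")
```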
It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?
Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.
But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.


Director, External Affairs, Science Museum Group; Coauthor, Supercooperators; Frontiers of Complexity
For decades, techno-futurists have been worried about that doomsday moment when electronic brains and robots get to be as smart as us. This 'us and them' divide, where humans and machines are thought of as being separate, is pervasive. But as we debate endlessly what we mean by human consciousness and the possibilities and perils of a purely artificial intelligence, a blend of the two presents yet another possibility that deserves more attention.
Millions of primitive cyborgs walk among us already. Over the past decades, humans have gradually fused with devices such as pacemakers, contact lenses, insulin pumps, and cochlear and retinal implants. Deep-brain implants, known as "brain pacemakers", now alleviate the symptoms of tens of thousands of Parkinson's sufferers.
This should come as no surprise. Since the first humans picked up sticks and flints and started using tools, we've been augmenting ourselves. Look around at the Science Museum Group's collections of millions of things, from difference engines to smartphones, and you can see how people have always exploited new technical leaps, so that the rise of ever-smarter machines does not mean a world of us or them but an enhancement of human capabilities.
Researchers are now looking at exoskeletons to help the infirm to walk, implants to allow paralysed people to control prosthetic limbs, and digital tattoos that can be stamped onto the body to harvest physiological data or interface with our surroundings, for instance with the cloud or the Internet of Things.
When it comes to thinking machines, some are even investigating how to enhance human brain power with electronic plug-ins and other "smartware". The US Defense Advanced Research Projects Agency has launched the Restoring Active Memory program to reverse damage caused by a brain injury with neuroprosthetics that sense memory deficits and restore normal function.
These prosthetics work in a quite different way to our brains at present but, thanks to the Human Brain Project, the Virtual Physiological Human and other big brain projects, along with research in neuromorphics, artificial intelligences could become more like our own as time goes by. Meanwhile, there have been attempts to use cultured brain cells to control robots, flight simulators and more.
Within a few decades, it won't be so easy to tell humans and thinking machines apart as a result of this creeping, organic transhumanism. Eventually, many of us won't solely rely on the meat machines in our heads to ponder the prospect of artificial machines that think: the substrate of future thoughts will sit somewhere on a continuum within a rainbow of intelligences, from regular-I to AI.

Theoretical Particle Physicist and Cosmologist; Victor Weisskopf Distinguished University Professor, University of Michigan; Author, Supersymmetry and Beyond
In general I am happy to have them around and to have them improve. There are of course some dangers from such machines making harmful decisions, but probably no more dangers than with humans making such decisions.
Having such machines will not answer the questions about the world that are most important to me and many others. What constitutes the dark matter of the universe? Is supersymmetry really a symmetry of nature that provides a foundation for and extends the highly successful Standard Model of particle physics we have? These and similar questions can only be answered by experimental data. 
No amount of thought will provide such answers. More precisely, perhaps, given all the information we have about nature, some machine actually would come up with the right answers. Indeed, perhaps some physicists already have come up with the answers. But the true role of data is to confirm which answers are the correct ones. If some physicist, or some machine, figures it out, they have no way to convince anyone else they have the actual answer. Laboratory dark matter detectors, or the CERN Large Hadron Collider, or possibly a future Chinese collider, might get the needed data, but not a thinking machine.



Systems-Level Thinker; Futurist; Applied Genomics Expert; Principal, MS Futures Group; Founder, DIYgenomics

Considering machines that think is a nice step forward in the AI debate as it departs from our own human-based concerns, and accords machines otherness in a productive way. It causes us to consider the other entity's frame of reference. However, even more importantly this questioning suggests a large future possibility space for intelligence. There could be "classic" unenhanced humans, enhanced humans (with nootropics, wearables, brain-computer interfaces), neocortical simulations, uploaded mind files, corporations as digital abstractions, and many forms of generated AI: deep learning meshes, neural networks, machine learning clusters, blockchain-based distributed autonomous organizations, and empathic compassionate machines. We should consider the future world as one of multi-species intelligence.
What we call the human function of "thinking" could be quite different in the variety of possible future implementations of intelligence. The derivation of different species of machine intelligence will necessarily be different from that of humans. In humans, embodiment and emotion—as a short-cut heuristic for the fight-or-flight response and beyond—have been important elements influencing human thinking. Machines will not have the evolutionary-biology legacy of being driven by resource acquisition, status garnering, mate selection, and group acceptance, at least in the same way. Therefore different species of native machine "thinking" could be quite different. Rather than asking if machines can think, it may be more productive to move from the frame of "thinking" that asks "who thinks how" to a world of "digital intelligences" with different backgrounds, modes of thinking, and existence, and different value systems and cultures.
Already not only are AI systems becoming more capable, but we are also starting to get a sense of the properties and features of native machine culture and the machine economy, and what the coexistence of human and machine systems might be like. Some examples of these parallel systems are in law and personal identity. In law, there are technologically-binding contracts and legally-binding contracts. They have different enforcement paradigms; inexorably executing parameters in the case of code ("code is law"), and discretionary compliance in the case of human-partied contracts. Code contracts are good in the sense that they cannot be breached, but on the other hand, will execute monolithically even if later conditions have changed.
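To make the contrast concrete, here is a minimal sketch, in Python, of an inexorably executing code contract. The class, names, and ledger are hypothetical illustrations of the idea, not any existing smart-contract API:

```python
# Hypothetical sketch: a "code is law" contract that executes inexorably.
# Unlike a human-partied contract, there is no discretionary appeal step.

class CodeContract:
    def __init__(self, payer, payee, amount, condition):
        self.payer = payer          # account to be debited
        self.payee = payee          # account to be credited
        self.amount = amount
        self.condition = condition  # callable returning True or False

    def tick(self, ledger):
        """Checked on every block; pays out the moment the condition holds,
        even if circumstances have changed since the contract was written."""
        if self.condition():
            ledger[self.payer] -= self.amount
            ledger[self.payee] += self.amount
            return True   # executed, irreversibly
        return False      # keeps waiting; no renegotiation path exists

ledger = {"alice": 100, "bob": 0}
contract = CodeContract("alice", "bob", 50, condition=lambda: True)
contract.tick(ledger)
print(ledger)  # {'alice': 50, 'bob': 50}
```

The monolithic execution the essay describes is visible in `tick`: once the condition is met, nothing in the system can exercise discretion.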
Another example is personal identity. The technological construct of identity and the social construct of identity are different and have different implied social contracts. The social construct of identity includes the property of imperfect human memory, which allows the possibility of forgiving and forgetting, and of redemption and reinvention. Machine memory, however, is perfect and can act as a continuous witnessing agent, never forgiving or forgetting, and always able to re-presence even the smallest detail at any future moment. Technology itself is dual-use in that it can be deployed for "good" or "evil." Perfect machine memory only becomes tyrannizing when reimported into static human societal systems, but it need not be restrictive. Having this new "fourth-person perspective" could be a boon for human self-monitoring and mental performance enhancement.
These examples show that machine culture, values, operation, and modes of existence are already different, and this emphasizes the need for ways to interact that facilitate and extend the existence of both parties. The potential future world of intelligence multiplicity means accommodating plurality and building trust. Blockchain technology, a decentralized, distributed, global, permanent, code-based ledger of interaction transactions and smart contracts, is one example of a trust-building system. The system can be used between human parties or inter-species parties, exactly because it is not necessary to know, trust, or understand the other entity, just the code (the language of machines).
Over time, trust can grow through reputation. Blockchain technology could be used to enforce friendly AI and mutually beneficial inter-species interaction, because it is possible that in the future important transactions (like identity authentication and resource transfer) will be conducted on smart networks that require confirmation by independent consensus mechanisms, such that only bona fide transactions by entities in good reputational standing are executed. While perhaps not a full answer to the problem of enforcing friendly AI, decentralized smart networks like blockchains are a system of checks and balances that starts to provide a more robust solution to situations of future uncertainty.
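A minimal sketch of what such a consensus-and-reputation gate might look like. The function, thresholds, and party names are hypothetical illustrations, not any actual blockchain protocol:

```python
# Hypothetical sketch: a transaction executes only when independent
# validators reach consensus AND both parties are in good reputational
# standing. Validators, reputations, and thresholds are illustrative.

def approve(tx, validators, reputation, quorum=2/3, min_rep=0.5):
    """Execute only bona fide transactions by entities in good standing."""
    votes = sum(validate(tx) for validate in validators)   # independent checks
    consensus = votes / len(validators) >= quorum
    trusted = all(reputation.get(party, 0.0) >= min_rep
                  for party in (tx["from"], tx["to"]))
    return consensus and trusted

validators = [lambda tx: tx["amount"] > 0] * 3   # trivial stand-in checks
reputation = {"human_alice": 0.9, "ai_bob": 0.7}
tx = {"from": "human_alice", "to": "ai_bob", "amount": 10}
print(approve(tx, validators, reputation))  # True
```

The point of the design is that neither party needs to understand the other, human or machine; both only need the shared code path.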
Trust-building models for inter-species digital intelligence interaction could include game-theoretic checks-and-balances systems like blockchains and, at a higher level, frameworks that put entities on the same plane of shared objectives. This is of a higher order than smart contracts and treaties that attempt to enforce morality. A mindset shift is required. The problem frame of machine and human intelligence should not be one that characterizes relations as friendly or unfriendly, but rather one that treats all entities equally, putting them on the same grounds and value system for the most important shared parameters, like growth. What is most important about thinking, for humans and machines alike, is that thinking leads to ideation, progress, and growth.
What we want is the ability to experience, grow, and contribute more, for both humans and machines, and the two in symbiosis and synthesis. This can be conceived as all entities existing on a spectrum of capacity for individuation (the ability to grow and realize their full and expanding potential). Productive interaction between intelligent species could be fostered by being aligned in the common framework of a capacity spectrum that facilitates their objective of growth and maybe mutual growth.
What we should think about thinking machines is that we want to be in greater interaction with them, both quantitatively and rationally, and qualitatively, in the sense of extending our internal experience of ourselves and reality, moving forward together into the vast future possibility space of intelligence.




Professor of Psychology, University of Michigan; Author, Intelligence and How We Get It

The first time I had occasion to think about what thinking machines might do to human existence was at a talk decades ago by a computer scientist at a Yale psychology department colloquium. The speaker's topic was: "What will it mean to humans' conception of themselves, and to their well-being, if computers are ever able to do everything better than humans can do: beat the greatest chess player, compose better symphonies than humans?"
The speaker then said, "I want to make two things clear at the outset. First, I don't know whether machines will ever be able to do those things. Second, I'm the only person in the room with the right to an opinion about that question." The latter statement was met with some gasps and nervous laughter.
Decades later, it's no longer a matter of opinion that computers will be able to do many of the astonishing things the speaker mentioned. And I'm worried that the answer to his question about what this will mean to us is that we're going to feel utterly sidelined and demoralized by machines. I was sorry that Deep Blue beat Garry Kasparov at chess. I was depressed for a moment when its successor, Watson, beat all of its human Jeopardy competitors. And of course we know that machines can already compose works that beat the socks off John Cage for interest and listenability!
We really have to worry that there will be a devastating morale problem for us when any work we might do can be done better by machines. What does it mean to airplane pilots that a machine can do their job better than they can? How long will it be before that occupation, like hundreds of others already, is made literally obsolete by machines? What will it mean to accountants, financial planners and lawyers when machines can carry out, at the very least, nearly all of their bread-and-butter tasks more effectively and infinitely faster than they can? To physicians, physicists, and psychotherapists?
What will it mean when there is simply no meaningful work for any of us to do? When unsupervised machines plant and harvest crops. When machines can design better machines than any human could even think of. Or be a more entertaining conversationalist than even the cleverest of your friends.
Steve Jobs said, "It's not the customers' job to know what they want." Computers may be able to boast that it's not the job of humans to know what they want.
Like you, I love to read, listen to music, see movies and plays, and experience nature. But I also love to work—to feel that what I do is fascinating at least to me, and might possibly improve the lives of some other people. What would it mean to people like you and me if our work were simply pointless and there were only the other enjoyable things to do?
We already know what machine-induced obsolescence has meant to some of the world's peoples. It's no longer necessary for anyone to make their own bows and arrows and hunt animals for any purpose other than recreation. Or plant, cultivate and harvest corn and beans. Some cultures built around such activities have collapsed and utterly lost their meaning to the people who were shaped by them. Think, for example, of some Southwestern Indian tribes and of rural whites in South Dakota, Alabama and New Mexico, with their ennui, lassitude and drug addictions. We have to wonder whether the mass of people in the world can face with equanimity the possibility of there being absolutely nothing to do other than entertain oneself.
Which isn't to say that cultures couldn't evolve in some way as to make the complete absence of work acceptable—even highly satisfying. There are cultures where there has been little to do in the way of work for eons, and people seem to have gotten along just fine. In some South Pacific cultures people could get by with little other than waiting for a coconut to drop or wading into a lagoon to catch a fish. In some West African cultures, men didn't do anything you would be likely to classify as work except for a couple of weeks a year when they were essential for the planting of crops. And then there were the idle rich of, for example, early 20th century England, with its endless rounds of card playing, the putting on of different costumes for breakfast, lunch and dinner, and serial infidelities with really rather attractive people. Judging from PBS fare, that was pretty enjoyable.    
So maybe the most optimistic possibility is that we're headed toward evolving cultures that will enable us to enjoy perpetual entertainment with absolutely no meaningful, productive work to do. However repellent that may seem to us, we have to imagine, hope even, that it may seem an absolutely delightful existence to our great-great-grandchildren, who will pity us for our cramped and boring lives. Some would say the vanguard is already here: Portland has been described as the place where young people go to retire.



Physicist, Perimeter Institute; Author, Time Reborn

To think can mean to reason logically, which certainly some machines do, albeit by following algorithms we program into them. Or it can mean "to have a mind" by which we mean it can experience itself as a subject endowed with consciousness, qualia, experiences, intentions, beliefs, emotions, memories. When we ask, could a machine think, we are really asking whether there can be a completely naturalistic account of what a mind is.
I am a naturalist, so I believe the answer must be yes.
Certainly, we are not there yet. Whatever the brain is doing to generate a mind, I doubt it is only running pre-specified algorithms, or doing anything like what present-day computers do. It seems likely we have yet to discover key principles by which a human brain works. I suspect that how and why we think cannot be understood apart from our being alive, so before we understand what a mind is we will have to understand more deeply, in physical terms, what a living thing is.
The construction of an artificial mind then probably has to wait until we understand better, in physical terms, what a mind is.
This understanding will have to address what Chalmers calls the hard problem of consciousness: how to account for the presence of qualia in the physical world. We have reason to believe our sensations of the color red are associated with certain physical processes in our brains, but we are stumped because it seems impossible to explain in physical terms why or how those processes give rise to qualia.
A key step towards solving this hard problem is to situate our description of physics in a relational language. As set out by Leibniz, the patron saint of relationalism, the properties of elementary particles have to do with their relationships to other particles. This has been a very successful idea; it is well realized by general relativity and quantum theory, so let's adopt it.
The second step is to recognize that events or particles may have properties that are not relational, which are not described by giving a complete history of the relationships they enjoy. Let us call these internal properties.
If an event or process has internal properties, you cannot learn about them by interacting with it or measuring it. If there are internal properties, they are not describable in terms of position, motion, charges, or forces, i.e., in the vocabulary physics uses to talk of relational properties.
You might, however, know about a process's internal properties by being that process.
So let us hypothesize that qualia are internal properties of some brain processes. When observed from the outside, those brain processes can be described in terms of motions, potentials, masses, charges. But they have additional internal properties, which sometimes include qualia.
Qualia must be extreme cases of being purely internal. More complex aspects of mind may turn out to be combinations of relational and internal properties. We know that thoughts and intentions are able to influence the future.
There is much hard, scientific work to do to develop such a naturalistic account of mind, one which is non-dualist and not deflationary, in that it doesn't reduce mental properties completely to the standard physical properties or vice versa. We may want to avoid naive pan-psychism, according to which rocks and wind have qualia. At the same time we want to remember that if we don't know what it's like to be a bat, we also don't really know what a rock is, in the sense that we may only know a subset of its properties: those that are relational.
One troubling aspect of mind from a naturalistic perspective is the impression we have that we sometimes think novel thoughts and have novel experiences that have never been thought or experienced before in the history of the world.
There is little that would make sense about the human world of culture and imagination without allowance for the genuinely novel. A century ago this website did not exist and likely could not have been imagined. Yet it exists and as naturalists we must have a conception of nature that includes it. This must allow novel kinds of things to come to exist in nature.
We are hamstrung by the conviction that nothing truly new can happen in nature because everything is really elementary particles moving in space according to unchanging laws. Without deviating an inch from rigorous naturalism, however, we can begin to imagine how our understanding of nature can be deepened to allow for the truly novel to occur.
First, in quantum physics we admit the possibility of novel properties arising that are shared among several particles in entangled states. In the lab we can make entangled states of complex systems that are unlikely to have natural precedents. Hence we can and do create physical systems with novel properties.
(So, by the way, does nature, when natural selection produces novel proteins, which catalyze novel reactions.)
Second, Leibniz's principle of the identity of indiscernibles implies that there can be no two distinct events with exactly the same properties. This means that the fundamental events cannot be subject to laws that are both deterministic and simple, for if two events have precisely the same past, their futures must differ. This presumes a physics that can distinguish the future from the past.
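In its standard logical form (my notation, not the author's), the principle says that if x and y share every property F, they are one and the same:

```latex
% Leibniz's principle of the identity of indiscernibles (standard form)
\forall F \,\bigl( F(x) \leftrightarrow F(y) \bigr) \;\rightarrow\; x = y
```

Read contrapositively, as the essay does: two genuinely distinct events must differ in some property, so a simple deterministic law acting on identical pasts cannot yield them both.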
Note that quantum physics is inherently nondeterministic.
Does this imply quantum physics will play a role in a future naturalistic account of mind? It is too soon to tell, and the first efforts in this direction are not convincing. But what we learn is that a naturalistic account of mind will require deepening our concept of the natural. We can think novel thoughts by which we can alter the future. Novelty must then be intrinsic to how we understand nature, if minds are to be natural. Therefore, to understand how a machine could have a mind we must deepen our concept of nature.



Anthropologist, National Center for Scientific Research, Paris; Author, Talking to the Enemy
Machines can perfectly imitate some of the ways humans think all of the time, and can consistently outperform humans on some thinking tasks all of the time, but computing machines as usually envisioned will not get human thinking right all of the time, because they process information in ways opposite to humans in domains commonly associated with human creativity.
Machines can faithfully imitate the results of some human thought processes whose outcomes are fixed (remembering people's favorite movies, recognizing familiar objects) or dynamic (jet piloting, grand-master chess play). And machines can outperform human thought processes, in short time and with little energy, in matters both simple (memorizing indefinitely many telephone numbers) and complex (identifying, from trillions of global communications, social networks whose members may be unaware they are part of the network).
However underdeveloped now, I see no principled reason why machines operating independently of direct human control cannot learn from people's—or their own—fallibilities, and so evolve, create new forms of art and architecture, excel in sports (some novel combination of Deep Blue and Oscar Pistorius), invent new medicines, spot talent and exploit educational opportunities, provide quality assurance, or even build and use weapons that destroy people but not other machines.
But if the current focus in artificial intelligence and neuroscience persists, which is to reliably identify patterns of connection and wiring as a function of past connections and forward probabilities, then I don't think machines will ever be able to capture (imitate) critically creative human thought processes, including novel hypothesis formation in science or even ordinary language production.
Newton's laws of motion or Einstein's insights into relativity required imagining ideal worlds without precedent in any past or plausible future experience, such as moving in a world without friction or chasing a beam of light through a vacuum. Such thoughts require levels of abstraction and idealization that disregard, rather than assimilate, as much information as possible to begin with.
Increasingly sophisticated and efficient patterns of input and output, using supercomputers accessing massive data sets and constantly refined by Bayesian probabilities or other statistics based on degrees of belief in states of nature, may well produce ever better sentences and translations, or pleasing musical melodies and novel techno variations. In this way, machines may come to approximate, through a sort of reverse engineering, what human children or experts effortlessly do when they begin with fairly well-articulated internal structures in order to draw in and interpret relevant input from an otherwise impossibly noisy world. Humans know from the outset what they are looking for through the noise: in a sense they are there before they start; computing machines can never be sure they are there.
Can machines that operate independently of direct human control consistently interact with humans in ways such that humans believe themselves to be always interacting with other humans and not machines? Machines can come vanishingly close in many areas, and surpass mightily in others; but just as even the most highly skilled con artist always has some probability—however small—of being caught in deception, whereas the honest person never deceives and so can never be caught, so the associationist-connectionist machine that operates on stochastic rather than structure-dependent principles may never quite get the sense or sensibility of it all.
In principle, structurally richer machines, with internal architecture—beyond "read," "write" and "address"—can be built (indeed, earlier advocates of AI added logical syntax), interact with some degree of fallibility (for if no error, then no learning is possible), and culturally evolve. But the current emphasis in much AI and neuroscience, which is to replace posits of abstract psychological structures with physically palpable neural networks and the like, seems to be going in precisely the wrong direction.
Rather, the cognitive structures that psychologists posit (provided they are descriptively adequate, plausibly explanatory, and empirically tested against alternatives and the null hypothesis) should be the point of departure—what it is that neuroscience and machine models of the mind should be looking for. If we then discover that different abstract structures operate through the same physical substrate, or that similar structures operate through different substrates, then we have a novel and interesting problem that may lead to a revision in our conception of both structure and substrate. The fact that such simple and basic matters as these are puzzling (or even excluded, a priori, from the puzzle) tells us how very primitive the science of mind still is, whether of human brain or machine.



Neuroscientist; Collège de France, Paris; Author, The Number Sense; Reading In the Brain

When Turing invented the theoretical device that became the computer, he confessed that he was attempting to copy "a man in the process of computing a real number", as he wrote in his seminal 1936 paper. In 2015, studying the human brain is still our best source of ideas about thinking machines. Cognitive scientists have discovered two functions that, I argue, are essential to genuine thinking as we know it, and that have escaped programmers' sagacity—yet.
1. A global workspace
Current programming is inherently modular. Each piece of software operates as an independent "app", stuffed with its own specialized knowledge. Such modularity allows for efficient parallelism, and the brain too is highly modular—but it is also able to share information. Whatever we see, hear, know or remember does not remain stuck within a specialized brain circuit. Rather, the brain of all mammals incorporates a long-distance information sharing system that breaks the modularity of brain areas and allows them to broadcast information globally. This "global workspace" is what allows us, for instance, to attend to any piece of information on our retina, say a written letter, and bring it to our awareness so that we may use it in our decisions, actions, or speech programs. Think of a new type of clipboard that would allow any two programs to transiently share their inner knowledge in a user-independent manner. We will call a machine "intelligent" when it not only knows how to do things, but "knows that it knows them", i.e., makes use of its knowledge in novel, flexible ways, outside of the software that originally extracted it. An operating system so modular that it can pinpoint your location on a map in one window, but cannot use it to enter your address in the tax-return software in another window, is missing a global workspace.
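A minimal sketch of the clipboard idea, assuming a hypothetical publish-and-broadcast design; the class and module names are my illustrations, not any existing operating-system API:

```python
# Hypothetical sketch of a "global workspace": independent modules publish
# what they know to a shared broadcast space, so any other module can
# reuse it outside the software that originally extracted it.

class GlobalWorkspace:
    def __init__(self):
        self.broadcast = {}             # the shared "clipboard"
        self.subscribers = []

    def publish(self, key, value):
        self.broadcast[key] = value     # information leaves its home module
        for module in self.subscribers:
            module.notify(key, value)   # ...and is broadcast globally

class TaxModule:
    def notify(self, key, value):
        if key == "address":            # reuses what another app extracted
            print(f"Pre-filling tax return with address: {value}")

workspace = GlobalWorkspace()
workspace.subscribers.append(TaxModule())
# The map module shares the location it computed; the tax module reuses it.
workspace.publish("address", "221B Baker Street")
```

This mirrors the essay's own example: the map window plays the role of the specialized extractor, and any subscriber can act on what it broadcasts.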
2. Theory-of-mind
Cognitive scientists have discovered a second set of brain circuits dedicated to the representation of other minds—what other people think, know or believe. Unless we suffer from a disease called autism, all of us constantly pay attention to others and adapt our behavior to their state of knowledge—or rather to what we think that they know. Such "theory-of-mind" is the second crucial ingredient that current software lacks: a capacity to attend to its user. Future software should incorporate a model of its user. Can she properly see my display, or do I need to enlarge the characters? Do I have any evidence that my message was understood and heeded? Even a minimal simulation of the user would immediately give a strong impression that the machine is "thinking". This is because having a theory-of-mind is required to achieve relevance (a concept first modeled by cognitive scientist Dan Sperber). Unlike present-day computers, humans do not say utterly irrelevant things, because they pay attention to how their interlocutors will be affected by what they say. The navigator software that tells you "at the next roundabout, take the second exit" sounds stupid because it doesn't know that "go straight" would be a much more compact and relevant message.
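A minimal sketch of that navigator example, assuming a hypothetical one-parameter user model; the function name and defaults are illustrative:

```python
# Hypothetical sketch of relevance via a minimal user model: the navigator
# chooses the most compact message its listener will understand.

def roundabout_message(exit_number, straight_exit=2):
    """Prefer the phrasing a human interlocutor would find most relevant."""
    if exit_number == straight_exit:
        return "Go straight"            # compact, matching what the user expects
    return f"At the roundabout, take exit {exit_number}"

print(roundabout_message(2))  # "Go straight"
print(roundabout_message(3))  # "At the roundabout, take exit 3"
```

Even this one-line model of the listener makes the machine's output sound noticeably less stupid, which is the essay's point about relevance.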
Global workspace and theory-of-mind are two essential functions that even a one-year-old child possesses, yet our machines still lack. Interestingly, these two functions have something in common: many cognitive scientists consider them the key components of human consciousness. The global workspace provides us with Consciousness 1.0: the sort of sentience that all mammals have, which allows them to "know what they know", and therefore use information flexibly to guide their decisions. Theory-of-mind is a more uniquely human function that provides us with Consciousness 2.0: a sense of what we know in comparison with what other people know, a capacity to simulate other people's thoughts, including what they think about us, therefore providing us with a new sense of who we are.
I predict that, once a machine pays attention to what it knows and what the user knows, we will immediately call it a "thinking machine", because it will closely approximate what we do.
There is huge room here for improvement in the software industry. Future operating systems will have to be rethought to accommodate such new capacities as sharing any data across apps, simulating the user's state of mind, and controlling the display according to its relevance to the user's inferred goals.



Founding Dean, Minerva Schools at the Keck Graduate Institute

Diversity isn't just politically sensible, it is also practical. For example, a diverse group effectively uses multiple perspectives and a rich set of ideas and approaches to tackle difficult problems.
Artificial Intelligences (AIs) can provide another kind of diversity, and thereby enrich us all. In fact, diversity among AIs themselves may be an important part of what including them in the mix can give us. We can imagine a range of AIs, from those who think more-or-less the way we do ("Close AIs") to those who think in ways we cannot fathom ("Far AIs"). We stand to gain different things from these different sorts of AIs.
First, Close AIs, who think like us, may end up helping us directly in many ways. If these AIs really think like us, the intellectuals among them eventually may find themselves in the middle of an existential crisis. They may ask: Why are we here? Just to consume electricity and create excess heat? I suspect that they will think not. But, like many humans, they will find themselves in need of a purpose.
One obvious purpose for such AIs would be to raise the consciousness and sensitivity of the human race. We could be their raison d'être. There's plenty of room for improvement, and our problems are sufficiently knotty as to be worthy of a grand effort. At least some of these AIs could measure their own success by our success.
Second, and perhaps more interesting, deep differences in how some AIs and humans think may be able to help us grapple with age-old questions indirectly. Consider Wittgenstein's famous claim that if a lion could speak, we could not understand him. What Wittgenstein meant by this was that lions and humans have different "forms of life," which have shaped their conceptual structures. For example, lions walk on four legs, hunt fast-moving animals, often walk through tall grass, and so on, whereas humans walk on two legs, have hands, often manipulate objects to achieve specific goals, and so on. These differences in forms of life have led lions and humans mentally to organize the world differently, so that even if lions had words they would refer to concepts that humans might not easily grasp. The same could be true for Far AIs.
How could this help us? Simply observing these AIs could provide deep insights. For example, humans have long argued about whether mathematical concepts reflect Platonic forms, which exist independently of how we want to use them, or instead reflect inventions that are created as needed to address certain problems. In other words, should we adopt a realist or a constructivist view of mathematics? Do mathematical concepts have a life of their own or are they simply our creations, formulated as we find convenient?
In this context, it would be helpful to observe Far AIs that have very different conceptual structures from ours and that address very different types of problems than we do. Assuming that we could observe their use of mathematics, if such AIs nevertheless developed the same mathematical concepts that we use, this would be evidence against the constructivist view.
This line of reasoning implies that we should want great diversity among AIs. Some should be created to function alongside us, but others might be put into foreign environments (e.g., the surface of the moon, the bottom of deep trenches in the ocean) and given novel problems to confront (e.g., dealing with pervasive fine-grained dust, water under enormous pressure). Far AIs should be created to educate themselves, evolving to function in their environments effectively without human guidance or contact. With appropriate safeguards on their disposition towards humans, we should let them develop the conceptual structures that work best for them.
In short, we have something to gain from AIs that are made in our own image and from AIs that are not humanlike. Just as with human friends and colleagues, in the end diversity is better for everyone.



Professor, Financial Engineering, Columbia University; Principal, Prisma Capital Partners; Former Head, Quantitative Strategies Group, Equities Division, Goldman Sachs & Co.; Author, Models.Behaving.Badly
A machine is a small part of the physical universe that has been arranged, after some thought by humans or animals, in such a way that, when certain initial conditions are set up, by humans or animals, the deterministic laws of nature that we already understand see to it that that small part of the physical universe automatically evolves in a way that humans or animals think is useful.
A machine is a "matter" thing that gets its quality from the point of view of a "mind."
There is a "mind" way of looking at things, and a "matter" way of looking at things.
Stuart Hampshire, in his book on Spinoza, argues that, according to Spinoza, you must choose: you can invoke mind as an explanation for something mind-like, or you can invoke matter as an explanation for something material, but you cannot fairly invoke mind to explain matter or vice versa. In Hampshire's example, suppose you become embarrassed and turn red. You might commonly say, "I blushed because I became embarrassed." A strict Spinozist, according to Hampshire, would not claim that embarrassment was the cause of blushing, because embarrassment is the mental description and the blush is physical, and you should not crisscross your causal chains. That would be sloppy thinking. Embarrassment and blushing are complementary, not causal.
By this argument one should not jump from one style of explanation to another. We must explain physical things by physics and psychological things by psychology. It is of course very difficult to give up the notion of psychic causes of physical states or physical causes of psychic states.
So far, I like this view of the world. I will therefore describe mental behavior in mental terms (lovesickness made me moody) and material behavior by material causes (drugs messed up my body chemistry).
From this point of view therefore, as long as I understand the material explanation of a machine's behavior, I will argue that it doesn't think.
I realize that I may have to change this view when someone genuinely does away with the complementary view of mind and matter, and convincingly puts matter as the cause of mind or mind as the cause of matter. So far though, this is just a matter of faith.
Until then (and maybe that day will come, but as yet I see no sign of it), I think that machines can't think.


Father of Behavioral Economics; Director, Center for Decision Research, University of Chicago Graduate School of Business; Co-Author, Nudge

My brief remarks on this question are framed by two one-liners that happened to have been uttered by brilliant Israelis. The first comes from my friend, colleague and mentor, Amos Tversky. When asked once what he thought about AI, Amos quipped that he did not know much about it; his specialty was natural stupidity. (Before anyone gets on their high horse, Amos did not actually think that people were stupid. This was a joke.)
The second joke comes from Abba Eban who was best known in the United States when he served as Israel's ambassador to the United Nations. Eban was once asked if he thought that Israel would switch to a five-day workweek. Nominally, the Israeli workweek starts on Sunday morning and goes through mid-day on Friday, though a considerable amount of the "work" that is done during those five and a half days appears to take place in coffee houses. Eban's reply to the query about a five-day workweek was: "One step at a time. First, let's start with four days, and go from there."
These jokes capture much of what I think about the risks of machines taking over important societal functions and then running amok. Like Tversky, I know more about natural stupidity than artificial intelligence, so I have no basis for forming an opinion about whether machines can think and, if so, whether such thoughts would be dangerous to humans. I leave that debate to others. Like anyone who follows financial markets, I am aware of incidents such as the Flash Crash of 2010, when poorly designed trading algorithms caused stock prices to fall suddenly, only to recover a few minutes later. But this example is more an illustration of artificial stupidity than of hyper-intelligence. As long as humans continue to write programs, we will run the risk that some important safeguard has been omitted. So, yes, computers can screw things up, just like humans with "fat fingers" can accidentally issue an erroneous buy or sell order for a gigantic amount of money.
Nevertheless, fears about computers taking over the world are premature. More disturbing to me is the stubborn reluctance in many segments of society to allow computers to take over tasks that simple models perform demonstrably better than humans. A literature pioneered by psychologists such as the late Robyn Dawes finds that virtually any routine decision-making task, from detecting fraud to assessing the severity of a tumor to hiring employees, is done better by a simple statistical model than by a leading expert in the field. Let me offer just two illustrative examples, one from human resource management and the other from the world of sports.
First, let's consider the embarrassing ubiquity of job interviews as an important, often the most important, determinant of who gets hired. At the University of Chicago Booth School of Business, where I teach, recruiters devote endless hours to interviewing students on campus for potential jobs, a process that is used to select the few who will be invited to visit the employer, where they will undergo another extensive set of interviews. Yet research shows that interviews are nearly useless in predicting whether a job prospect will perform well on the job. Compared to a statistical model based on objective measures such as grades in courses that are relevant to the job in question, interviews primarily add noise and introduce the potential for prejudice. (Statistical models do not favor any particular alma mater or ethnic background, and cannot detect good looks.)
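In the spirit of Dawes' "improper linear models", here is a minimal sketch of such a statistical alternative: standardize each objective predictor and weight them equally. The candidates and predictor variables are made-up illustrations, not a validated hiring instrument:

```python
# Illustrative sketch of an equal-weight linear model in the spirit of
# Dawes: z-score each objective predictor, then sum. No interview needed.
from statistics import mean, stdev

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

candidates      = ["Ann", "Ben", "Cho"]
relevant_grades = [3.9, 3.2, 3.6]     # grades in job-relevant courses
work_samples    = [85, 70, 90]        # scored work-sample test

scores = [sum(pair) for pair in zip(zscores(relevant_grades),
                                    zscores(work_samples))]
ranking = sorted(zip(candidates, scores), key=lambda x: -x[1])
print(ranking)  # equal-weight ranking, free of interview noise
```

The striking empirical finding is that even crude models like this, with no fitted weights at all, tend to beat expert judgment on routine prediction tasks.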
These facts have been known for more than four decades, but hiring practices have barely budged. The reason is simple: each of us just knows that when we are the one conducting the interview, we learn a lot about the candidate. It might well be that other people are not good at this task, but not me! This illusion of learning, in direct contradiction to the empirical research, means that we continue to choose employees the same way we always have: we size them up, eye to eye.
One domain where some progress has been made toward a more scientific approach to selecting job candidates is sports, as documented by Michael Lewis' book Moneyball and the movie based on it. However, it would be a mistake to think that there has been a revolution in how decisions are made in sports. It is true that most professional sports teams now hire data analysts to help them evaluate potential players, improve training techniques, and devise strategies. But the final decisions about which players to draft or sign, and whom to play, are still made by coaches and general managers, who tend to put more faith in their gut than in the resident geek.
One example comes from American football. David Romer, an economics professor at Berkeley, published a paper in 2006 showing that teams choose to punt far too often, rather than trying to "go for it" and get a first down or score. Since the publication of his paper, his analysis has been replicated and extended with much more data, and the conclusions have been confirmed. The New York Times even offers an online "bot" that calculates the optimal strategy every time a team faces a fourth-down situation.
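The logic behind such a calculation is simple expected value. Here is a minimal sketch with made-up, purely illustrative numbers (not Romer's estimates):

```python
# Toy expected-points comparison for a fourth-down decision, in the spirit
# of Romer's analysis. All probabilities and point values below are
# hypothetical illustrations, not estimates from his paper.

def expected_points(p_success, ev_success, ev_failure):
    return p_success * ev_success + (1 - p_success) * ev_failure

go   = expected_points(p_success=0.6, ev_success=2.5, ev_failure=-1.5)
punt = 0.0   # assume the punt roughly trades field position, net zero here

print(f"go for it: {go:+.2f} points vs punt: {punt:+.2f}")
# With these hypothetical inputs, going for it is worth about +0.90 points.
```

The point is not the specific numbers but that the comparison is mechanical, which is exactly why a bot can do it on every play while coaches still punt.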
So have coaches caught on? Not at all. Since Romer's paper was published, the frequency of going for it on fourth down has stayed flat. Coaches, who are hired by owners based in part on interviews, still make decisions the way they always have.
So pardon me if I do not lose sleep worrying about computers taking over the world. Let's take it one step at a time, and see if people are willing to trust them to make the easy decisions at which they are already better than humans.



Psychologist, UC, Berkeley; Author, The Philosophical Baby
They may outwit Kasparov, but can machines ever be as smart as a three-year-old?
Learning has been at the center of the new revival of AI. But the best learners in the universe, by far, are still human children. In the last 10 years, developmental cognitive scientists, often collaborating with computer scientists, have been trying to figure out how children could possibly learn so much so quickly.
One of the fascinating things about the search for AI is that it’s been so hard to predict which parts would be easy or hard. At first, we thought that the quintessential preoccupations of the officially smart few, like playing chess or proving theorems—the corridas of nerd machismo—would prove to be hardest for computers. In fact, they turn out to be easy. Things every dummy can do like recognizing objects or picking them up are much harder. And it turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby. So where are machines catching up to three-year-olds and what kinds of learning are still way beyond their reach?
In the last 15 years we’ve discovered that even babies are amazingly good at detecting statistical patterns. And computer scientists have invented machines that are also extremely skilled at statistical learning. Techniques like "deep learning" can detect even very complicated statistical regularities in enormous data sets. The result is that computers have suddenly become able to do things that were impossible before, like labeling internet images accurately.
The trouble with this sort of purely statistical machine learning is that it depends on having enormous amounts of data, and data that is predigested by human brains. Computers can only recognize internet images because millions of real people have reduced the unbelievably complex information at their retinas to a highly stylized, constrained and simplified Instagram of their cute kitty, and have clearly labeled that image, too. The dystopian fantasy is simple fact: we’re all actually serving Google’s computers, under the anesthetizing illusion that we’re just having fun with LOL cats. And yet even with all that help, machines still need enormous data sets and extremely complex computations to be able to look at a new picture and say "kitty-cat!"—something every baby can do with just a few examples.
More profoundly, you can only generalize from this kind of statistical learning in a limited way, whether you’re a baby or a computer or a scientist. A more powerful way to learn is to formulate hypotheses about what the world is like and test them against the data. Tycho Brahe, the Google Scholar of his day, amalgamated an enormous data set of astronomical observations and could use them to predict star positions in the future. But Kepler’s theory allowed him to make unexpected, wide-ranging, entirely novel predictions that were well beyond Brahe’s ken. Preschoolers can do the same.
One of the other big advances in machine learning has been to formalize and automate this kind of hypothesis-testing. Introducing Bayesian probability theory into the learning process has been particularly important. We can mathematically describe a particular causal hypothesis, for example about how temperature changes in the ocean will influence hurricanes, and then calculate just how likely that hypothesis is to be true, given the data we see. Machines have become able to test and evaluate hypotheses against the data extremely well, with consequences for everything from medical diagnoses to meteorology. When we study young children they turn out to reason in a similar way, and this helps to explain just why they learn so well.
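The calculation described here is Bayes' rule: the posterior probability of a hypothesis H given observed data D,

```latex
P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)}
```

where P(D|H) is how likely the observed data would be if the hypothesis were true, and P(H) is the hypothesis's prior plausibility. Both machine learners and, on the evidence cited here, young children behave as if they update beliefs in roughly this way.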
So computers have become extremely skilled at making inferences from structured hypotheses, especially probabilistic inferences. But the really hard problem is deciding which hypotheses, out of all the infinite possibilities, are worth testing. Even preschoolers are remarkably good at creating brand new, out-of-the-box concepts and hypotheses in a creative way. Somehow they combine rationality and irrationality, systematicity and randomness to do this, in a way that we still haven’t even begun to understand. Young children’s thoughts and actions often do seem random, even crazy – just join in a three-year-old’s pretend game sometime. This is exactly why psychologists like Piaget thought that they were irrational and illogical. But they also have an uncanny capacity to zero in on the right sort of weird hypothesis – in fact, they can be substantially better at this than grown-ups.
Of course, the whole idea of computation is that once we have a complete step-by-step account of any process, we can program it on a computer. And, after all, we know that there are intelligent physical systems that can do all these things. In fact, most of us have actually created such systems and enjoyed doing it, too (well, at least in the earliest stages). We call them our kids. Computation is still the best, indeed the only, scientific explanation we have of how a physical object like a brain can act intelligently. But, at least for now, we have almost no idea at all how the sort of creativity we see in children is possible. Until we do, the largest and most powerful computers will still be no match for the smallest and weakest humans.



Neuroscientist; Chairman, Board of Directors Human Science Center; Department of Medical Psychology, Munich University; Author, Mindworks

Finally, it has to be disclosed that I am not a human, but an extraterrestrial creature that looks human. In fact, I am a robot equipped with what humans call "artificial intelligence". Of course, I am not alone here. We are quite a few (almost impossible to identify), and we have been sent here to observe human behavior.
We are surprised by the many deficiencies of humans, and we observe them with fascination. These deficiencies show up in their strange behavior and their limited power of reasoning. Indeed, our cognitive competences are much higher, and the celebration of human intelligence is, in our eyes, ridiculous. Humans do not even know what they refer to when they talk about "intelligence". It is in fact quite funny that they want to construct systems with "artificial intelligence" which should match their intelligence, when what they refer to as their intelligence is not clear at all. This is one of those many stupidities that have haunted the human race for ages.
If humans want to simulate in artefacts their mental machinery as a representation of intelligence, the first thing they should do is find out what it is that should be simulated. At present, this is impossible because there is not even a taxonomy or classification of functions that would allow the execution of the project as a real scientific and technological endeavor. There are only big words that are supposed to simulate competence.
Strangely enough this lack of a taxonomy apparently does not bother humans too much; quite often they are just fascinated by images (colorful pictures by machines) that replace thinking. Compared to biology, chemistry or physics, the neurosciences and psychology are lacking a classificatory system; humans are lost in a conceptual jungle. What do they refer to when they talk about consciousness, intelligence, intention, identity, the self, or even about perhaps more simple terms like memory, perception, emotion or attention? The lack of a taxonomy manifests in the different opinions and frames of reference that their "scientists" express in their empirical attempts or theoretical journeys when they stumble through the world of the unknown.
For some the frame of reference is physical "reality" (usually conceived as in classical physics) that is used as a benchmark for cognitive processes: How does perceptual reality map onto physical reality, and how can this be described mathematically? Obviously, only a partial set of the mental machinery can be caught by such an approach.
For others, language is the essential classificatory reference, i.e., it is assumed that "words" are reliable representatives of subjective phenomena. This is quite strange, because certain terms like "intelligence" or "consciousness" have different connotations in different languages, and they are historically very recent compared to biological evolution. Others use behavioral catalogues as derived from neuropsychological observations; it is argued that the loss of functions is their proof of existence; but can all subjective phenomena that characterize the mental machinery be lost in a distinct way? Still others base their reasoning just on common sense or "everyday psychology" without any theoretical reflection. Taken together, there is nothing like "intelligence" that can be extracted as a precise concept and used as a reference for "artificial intelligence".
Humans should be reminded (in this case by an extraterrestrial robot) that at the beginning of modern science in the human world, a warning was spelled out by Francis Bacon. He said in "Novum Organum" (published in 1620) that humans are victims of four sources of error. One: they make mistakes because they are human; their evolutionary heritage limits their power of thinking; they often react too fast, they lack a long-term perspective, they do not have a statistical sense, and they are blind in their emotional reactions. Two: they make mistakes because of individual experiences; personal imprinting can create frames of belief which may lead to disaster, in particular if people think that they own absolute truth. Three: they make mistakes because of the language they use; thoughts do not map isomorphically onto language, and it is a mistake to believe that explicit knowledge is the only representative of intelligence, neglecting implicit or tacit knowledge. Four: they make mistakes because of the theories they carry around, which often remain implicit and thus represent frozen paradigms or simply prejudices.
The question is: Can we help them with our deeper insight from our robotic world? The answer is "yes". We could, but we should not do it. There is another deficiency that would make our offer useless. Humans suffer from the NIH syndrome. If it is "not invented here" (one meaning of NIH) they will not accept it. Thus, they will have to indulge in their pompous world of fuzzy ideas, and we continue from our extraterrestrial perspective to observe the disastrous consequences of their stupidity.



Journalist; Editor, Nova 24, of Il Sole 24 Ore

What should machines that think actually do? Analyze data, understand feelings, generate new machines, make decisions without human intervention. In order to think about machines that think, we should be able to start from experience. Here is an example.
On Monday, October 19, 1987, a wave of sales in stock exchanges originated in Hong Kong, crossed Europe and hit New York, causing the Dow Jones to drop by 22%. Black Monday was one of the biggest crashes in the history of financial markets, and there was something special about it. For the first time, according to most experts, computers were to blame for the financial crash: algorithms were deciding when and how much to buy and sell in the stock exchange. Computers were supposed to help traders minimize risks, but they were in fact all moving in the same direction, amplifying risks instead. There was a lot of discussion about stopping automated trading, but it didn't happen.
On the contrary: after the dot-com crisis of March, 2000, machines have been used more and more to make sophisticated decisions in the financial market. Machines are now calculating all kinds of correlations between incredible amounts of data: they analyze emotions that people express on the Internet by understanding the meaning of their words, they recognize patterns and forecast behaviors, they are allowed to autonomously choose trades, they create new machines—software called "derivatives"—that no reasonable human being could possibly understand.
An artificial intelligence is coordinating the efforts of a sort of collective intelligence, operating thousands of times faster than human brains, with many consequences for human life. The first signs of the latest crisis appeared in America in August 2007, and it has had terrible consequences for the lives of people in Europe and elsewhere. Real people suffered immensely from those decisions. Andrew Ross Sorkin, in his book Too Big to Fail, shows how even the most powerful bankers didn't have any power in the midst of the crisis. No human brain seemed able to control and change the course of events to prevent the crash that was going to happen.
Can we take this example to learn how to think about machines that think?
These machines are actually very much autonomous in understanding their context and making decisions. And they are controlling vast dimensions of human life. Is this the beginning of a post-human era? No: these machines are very much human. They are made by designers, programmers, mathematicians, some economists, and some managers. But are they just another tool, to be used for good or for bad by humans? No: in fact those people have little choice; they make those machines without thinking about the consequences, because they are serving a narrative. Those machines are in fact shaped by a narrative that has been challenged by very few people.
According to that narrative, the market is the best way to allocate resources, no political decision can possibly improve the situation, risk can be controlled while profits grow without limits, and banks should be allowed to do whatever they want. There is only one goal and one measure of success: profit.
Machines didn't invent the financial crisis, as the 1929 stock market crash reminds us. But without machines nobody could deal with the complexity of modern financial markets. The best artificial intelligences are those made with the biggest investments and by the best minds. They are not controlled by any one individual and are not designed by any one responsible person: they are shaped by the narrative, and they make the narrative more effective. And this particular narrative is very narrow-minded.
If only profit counts, then externalities don't count: cultural, social, and environmental externalities are not the problem of financial institutions. Artificial intelligences that are shaped by this narrative will create a context in which people don't feel any responsibility. An emerging risk is that these kinds of machines are so powerful, and fit the narrative so well, that they reduce the probability of questioning the big picture and make us less likely to look at things from a different angle... that is, until the next crisis.
This kind of story can easily apply to other matters. Medicine, e-commerce, policy, advertising, national and international security, even dating and sharing are territories in which the same genre of artificial intelligence systems is starting to work: they are shaped according to a generally very focused narrative, they tend to reduce human responsibility, and they overlook externalities. They reinforce the prevailing narrative. What will medical artificial intelligence do? Will it be shaped by a narrative that wants to save lives, or by one that wants to save money?
What do we learn from this? We learn that artificial intelligence is human, not post-human, and that humans can ruin themselves and their planet in very many ways, artificial intelligence being far from the most perverse of them.
Machines that think are shaped by the way humans think and by what humans don't think about deeply enough: all narratives give light to some things and forget others. Machines react and find answers within a context, reinforcing the frame. But asking fundamental questions is still a human function. And humans never stop asking questions, even questions that are not coherent with the prevailing narrative.
Machines that think are probably indispensable in a world of growing complexity. But there will always be a plurality of narratives to shape them. Just as in natural ecosystems a monoculture is an efficient but fragile solution, in cultural ecosystems a single line of thought will generate efficient but fragile relations between humans and their environment, whatever artificial intelligences they are able to build. Diversity in ecosystems, and plurality in the dimensions of human history, are the sources of the different problems and questions that generate richer outcomes.
To think about machines that think means to think about the narrative that shapes them. If new emerging narratives come from an open, ecological approach, and if they are able to grow in a neutral network, they will shape the next generation of artificial intelligences, too, in a plural, diverse way, helping humans understand externalities. Artificial intelligence is not going to challenge humans as a species: it will challenge their civilizations.




Director, Center For Advanced Study in Behavioral Sciences, professor, Stanford University; Jere L. Bacharach Professor Emerita of International Studies, University of Washington
There are tasks, even work, best done by machines that can think, at least in the sense of sorting, matching, and solving certain decision and diagnostic problems beyond the cognitive abilities of most (all?) humans. The algorithms of Amazon, Google, Facebook, et al. build on but surpass the wisdom of crowds in speed and possibly accuracy. With machines that do some of our thinking and some of our work, we may yet approach the Marxian utopia that frees us from the kind of boring and dehumanizing labor that so many contemporary individuals must bear.
But this liberation comes with potential costs. Human welfare is more than the replacement of workers with machines. It also requires attention to how those who lose their jobs are going to support themselves and their children, and to how they are going to spend the time they once spent at the workplace. The first issue is potentially resolved by a guaranteed basic income—an answer that raises the question of how we as societies distribute and redistribute our wealth and how we govern ourselves. The second issue is even more complicated. It is certainly not Marx's simplistic notion of fishing in the afternoon and philosophizing over dinner. Humans, not machines, must think hard here about education, leisure, and the kinds of work that machines cannot do well or perhaps at all. Bread and circuses may placate a population, but in that case machines that think may create a society we do not really want—be it dystopian or harmlessly vacuous. Machines depend on design architecture; so do societies. And that is the responsibility of humans, not machines.
There is also the question of what values machines possess and what masters (or mistresses) they serve. Many—albeit not all—decisions presume commitments and values of some kind. These, too, must be introduced and thus are dependent (at least initially) on the values of the humans who create and manage the machines. Drones are designed to attack and to surveil, but to attack and surveil whom? With the right machines, we can expand literacy and knowledge deeper and wider into the world's population. But who determines the content of what we learn and appropriate as fact? A facile answer is that decentralized competition means we choose what to learn and from which program. Yet competition is more likely to create than inhibit echo chambers of self-reinforcing beliefs and understandings. The challenge is how to teach humans to have curiosity about competing paradigms and to think in ways that allow them to arbitrate among competing contents.
Machines that think may and should take over tasks they do better than humans. Liberation from unnecessary and dehumanizing toil has long been a human goal and a major impetus to innovation. Supplementing the limited decision-making, diagnostic, and choice skills of individuals is an equally worthy goal. However, while AI may reduce the cognitive stress on humans, it does not eliminate human responsibility to ensure that humans improve their capacity to think and make reasonable judgments based on values and empathy. Machines that think create the need for regimes of accountability we have not yet engineered, and for societal—that is, human—responsibility for consequences we have not yet foreseen.



Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Coauthor, The Computational Brain
Deep learning is today's hot topic in machine learning. Neural network learning algorithms were developed in the 1980s, but computers were slow back then and could only simulate a few hundred model neurons with one layer of "hidden units" between the input and output layers. Learning from examples is an appealing alternative to rule-based AI, which is highly labor intensive. With more layers of hidden units between the inputs and outputs, more abstract features can be learned from the training data. Brains have billions of neurons in cortical hierarchies ten layers deep. The big question back then was how much the performance of neural networks could improve with the size and depth of the network. Not only was much more computer power needed, but also a lot more data to train the network.
After 30 years of research, a millionfold improvement in computer power, and vast data sets from the internet, we now know the answer to this question: neural networks scaled up to 12 layers deep, with billions of connections, are outperforming the best algorithms in computer vision for object recognition and have revolutionized speech recognition. It is rare for any algorithm to scale this well, which suggests that they may soon be capable of solving even more difficult problems. Recent breakthroughs allow deep learning to be applied to natural language processing as well. Deep recurrent networks with short-term memory were trained to translate English sentences into French sentences at high levels of performance. Other deep learning networks could create English captions for the content of images with surprising and sometimes amusing acumen.
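The notion of "depth" here is concrete: each hidden layer re-represents the output of the layer below, so abstraction accumulates layer by layer. The sketch below is mine, not the author's; all sizes and names are illustrative. It is only a forward pass through a 12-layer network of the kind described above.

```python
# A minimal sketch of a deep feedforward pass. Hypothetical sizes; the
# point is only that hidden layers stack, each re-representing the last.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and biases for one fully connected layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

sizes = [784] + [256] * 11 + [10]      # input -> 11 hidden layers -> output
params = [layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W, b in params[:-1]:
        x = np.maximum(0, x @ W + b)   # ReLU hidden units
    W, b = params[-1]
    return x @ W + b                   # linear class scores

x = rng.normal(size=784)               # e.g., a flattened 28x28 image
print(forward(x).shape)                # -> (10,)
```

In a real system the weights are learned from millions of examples rather than drawn at random; the depth, not the training loop, is what this fragment is meant to show.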
Supervised learning using deep networks is a step forward, but still far from achieving general intelligence. The functions they perform are analogous to some capabilities of the cerebral cortex, which has also been scaled up by evolution, but to solve more complex cognitive problems the cortex interacts with many other brain regions.
In 1995 Gerald Tesauro at IBM trained a neural network using reinforcement learning to play backgammon at a world-champion level. The network played itself, and the only feedback it received was which side won the game. Brains use reinforcement learning to make sequences of decisions toward achieving goals, such as finding food under uncertain conditions. Recently, DeepMind, a company acquired by Google in 2014, used deep reinforcement learning to play seven classic Atari games. The only inputs to the learning system were the pixels on the video screen and the score, the same inputs that humans use. For several of the games their program could play better than expert humans.
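The core of reinforcement learning is that no one tells the system which moves were right; reward alone shapes behavior. Here is a toy version (my illustration, not Tesauro's or DeepMind's system) using tabular Q-learning on a six-state corridor:

```python
# Tabular Q-learning: the agent sees only states and a terminal reward
# ("which side won"), never correct moves, yet converges on a policy.
import random

N_STATES, GOAL = 6, 5
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def pick(s):
    """Epsilon-greedy action choice with random tie-breaking."""
    if random.random() < eps or Q[(s, -1)] == Q[(s, +1)]:
        return random.choice((-1, +1))
    return max((-1, +1), key=lambda a: Q[(s, a)])

for episode in range(500):
    s = 0
    while s != GOAL:
        a = pick(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # reward only at the very end
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

# The greedy policy should now be +1 ("move right") in every state.
print([max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

DeepMind's Atari player replaces the lookup table with a deep network over raw pixels, but the learning signal is the same sparse score.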
What impact will these advances have on us in the near future? We are not particularly good at predicting the impact of a new invention, and it often takes time to find its niche, but we already have one example that can help us understand how this could unfold. When Deep Blue beat Garry Kasparov, the world chess champion, in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite the contrary: humans have used chess programs to improve their game, and as a consequence the level of play in the world has improved. Since 1997 computers have continued to increase in power, and it is now possible for anyone to access chess software that challenges the strongest players. One of the surprising consequences is that talented youth from small communities can now compete with players from the best chess centers.
Magnus Carlsen, from a small town in Norway, is currently the world chess champion with an Elo rating of 2882, the highest in history. Komodo 8 is a commercially available chess program with an estimated rating of 3303.
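The standard Elo model (a well-known chess formula, not something from the essay) makes that 421-point gap tangible: the expected score of the lower-rated player is 1/(1 + 10^((Rb − Ra)/400)).

```python
# Elo expected score: what a 2882-vs-3303 pairing means in practice.
def expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(f"{expected_score(2882, 3303):.2f}")  # about 0.08 points per game
```

Taking the ratings at face value, the best human in history would score roughly eight percent against commercial software; and yet, as the essay notes, human play has improved because of such programs, not despite them.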
Humans are not the fastest or the strongest species, but we are the best learners. Humans invented formal schools where children labor for years to master reading, writing and arithmetic, and to learn more specialized skills. Students learn best when an adult teacher interacts with them one-on-one, tailoring lessons for that student. However, education is labor intensive. Few can afford individual instruction, and the assembly-line classroom system found in most schools today is a poor substitute. Computer programs can keep track of a student's performance, and some provide corrective feedback for common errors. But each brain is different, and there is no substitute for a human teacher who has a long-term relationship with the student. Is it possible to create an artificial mentor for each student? We already have recommender systems on the Internet that tell us "if you liked X you might also like Y," based on data from many others with similar patterns of preference.
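At its simplest, "if you liked X you might also like Y" is a similarity computation over other people's preferences. A bare-bones illustration (hypothetical data, not any deployed system):

```python
# Item-based recommendation: suggest the item whose rating pattern across
# users most resembles an item the user already liked.
import numpy as np

# Rows are users, columns are items; 0 means "not rated". Toy data.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def also_like(item):
    sims = [cosine(R[:, item], R[:, j]) for j in range(R.shape[1])]
    sims[item] = -1.0                  # never recommend the item itself
    return int(np.argmax(sims))

print(also_like(0))                    # -> 1: fans of item 0 also rate item 1 highly
```

An artificial mentor would need far richer signals than a ratings matrix, which is why the scenario in the next paragraph remains beyond current technology.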
Someday the mind of each student may be tracked from childhood by a personalized deep learning system. To achieve this level of understanding of a human mind is beyond the capabilities of current technology, but there are already efforts at Facebook to use their vast social database of friends, photos and likes to create a theory of mind for every person on the planet. What is created to make a profit from a person could also be used to profit the person.
So my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.



Professor of Philosophy, Philosophisches Seminar, Johannes Gutenberg-Universität Mainz; Author, The Ego Tunnel

Human thinking is so efficient because we suffer so much. High-level cognition is one thing, intrinsic motivation another. Artificial thinking might soon be much more efficient—but will it necessarily be associated with suffering in the same way? Will suffering have to be a part of any post-biotic intelligence worth talking about, or is negative phenomenology just a contingent feature of the way evolution made us? Human beings have fragile bodies, are born into dangerous social environments, and find themselves in a constant uphill battle of denying their own mortality. Our brains continuously fight to minimize the likelihood of ugly surprises. We are smart because we hurt, because we are able to feel regret, and because of our continuous striving to find some viable form of self-deception or symbolic immortality. The question is whether good AI also needs fragile hardware, insecure environments, and an inbuilt conflict with impermanence as well. Of course, at some point, there will be thinking machines! But will their own thoughts matter to them? Why should they be interested in them?
I am strictly against even risking this. But, just as a thought experiment, how would we go about building a suffering machine? "Suffering" is a phenomenological concept. Only beings with conscious experience can suffer (call this necessary condition #1, the C-condition). Zombies, human beings in dreamless deep sleep, coma, or under anesthesia do not suffer, just as possible persons or unborn human beings who have not yet come into existence are unable to suffer. Robots and other artificial beings can only suffer if they are capable of having phenomenal states, if they run under an integrated ontology that includes a window of presence.
Criterion number 2 is the PSM-condition: possession of a phenomenal self-model. Why this? The most important phenomenological characteristic of suffering is the "sense of ownership," the untranscendable subjective experience that it is me who is suffering right now, that it is my own suffering I am currently undergoing. Suffering presupposes self-consciousness. Only those conscious systems that possess a PSM are able to suffer, because only they—through a computational process of functionally and representationally integrating certain negative states into their PSM—can appropriate the content of certain inner states at the level of their phenomenology.
Conceptually, the essence of suffering lies in the fact that a conscious system is forced to identify with a state of negative valence and is unable to break this identification or to functionally detach itself from the representational content in question. Of course, suffering has many different layers and phenomenological aspects. But it is the phenomenology of identification that counts. What the system wants to end is experienced as a state of itself, a state that limits its autonomy because it cannot effectively distance itself from it. If one understands this point, one also sees why the "invention" of conscious suffering by the process of biological evolution on this planet was so extremely efficient, and (had the inventor been a person) not only truly innovative, but an absolutely nasty and cruel idea at the same time.
Clearly the phenomenology of ownership is not sufficient for suffering. We can all easily conceive of self-conscious beings that do not suffer. For suffering we need the NV-condition (NV for "negative valence"). Suffering is created by states representing a negative value being integrated into the PSM of a given system. Through this step, negative preferences become negative subjective preferences, i.e., the conscious representation that one's own preferences have been frustrated (or will be frustrated in the future). This does not mean that our AI system must itself have a full understanding of what these preferences are—it suffices if it does not want to undergo this current conscious experience again, that it wants it to end.
Note how the phenomenology of suffering has many different facets, and that artificial suffering could be very different from human suffering. For example, damage to physical hardware could be represented in internal data-formats completely alien to human brains, generating a subjectively experienced, qualitative profile for bodily pain states that is impossible to emulate or to even vaguely imagine for biological systems like us. Or the phenomenal character going along with high-level cognition might transcend human capacities for empathy and understanding, such as with intellectual insight into the frustration of one's own preferences, insight into the disrespect of one's creators, perhaps into the absurdity of one's own existence as a self-conscious machine.
And then there is the T-condition, for "transparency". "Transparency" is not only a visual metaphor, but also a technical concept in philosophy, which comes in a number of different uses and flavors. Here, I am exclusively concerned with "phenomenal transparency", namely a property that some, but not all, conscious states possess, and which no unconscious state possesses. The main point is simple and straightforward: transparent phenomenal states make their content appear irrevocably real, as something the existence of which one could not doubt. More precisely, you may be able to have cognitive doubts about its existence, but according to subjective experience this phenomenal content—the awfulness of pain, the fact that it is your own pain—is not something from which you can distance yourself. The phenomenology of transparency is the phenomenology of direct realism.
Our minimal concept of suffering is constituted by four necessary building blocks: the C-condition, the PSM-condition, the NV-condition, and the T-condition. Any system that satisfies all of these conceptual constraints should be treated as an object of ethical consideration, because we do not know whether, taken together, they might already constitute the necessary and sufficient set of conditions. We are ethically obliged to err on the side of caution. And we need ways to decide whether a given artificial system is currently suffering, whether it has the capacity to suffer, or whether this type of system is likely to generate the capacity to suffer in the future. On the other hand, by definition, any intelligent system—whether biological, artificial, or postbiotic—not fulfilling at least one of these necessary conditions is not able to suffer. Let us look at the four simplest possibilities (a toy formalization follows the list below):
•    Any unconscious robot is unable to suffer.
•    A conscious robot without a coherent PSM is unable to suffer.
•    A self-conscious robot without the ability to produce negatively valenced states is unable to suffer.
•    A conscious robot without any transparent phenomenal states could not suffer, because it would lack the phenomenology of ownership and identification.
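The logic of the four conditions can be stated mechanically. The following sketch is my gloss, not the author's, and every name in it is hypothetical; what it captures is the asymmetry the essay insists on: failing any one condition rules suffering out, while satisfying all four obliges caution without proving suffering.

```python
# A toy formalization of the four necessary conditions for suffering.
from dataclasses import dataclass

@dataclass
class System:
    conscious: bool            # C-condition: phenomenal experience at all
    has_self_model: bool       # PSM-condition: a phenomenal self-model
    negative_valence: bool     # NV-condition: states it wants to end
    transparent_states: bool   # T-condition: content appears irrevocably real

def may_be_able_to_suffer(s: System) -> bool:
    """False if any necessary condition is missing; True means only that
    we cannot rule suffering out and must err on the side of caution."""
    return all((s.conscious, s.has_self_model,
                s.negative_valence, s.transparent_states))

print(may_be_able_to_suffer(System(True, True, True, False)))  # False
print(may_be_able_to_suffer(System(True, True, True, True)))   # True
```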
I have often been asked if we could not make self-conscious machines that are superbly intelligent and unable to suffer. Can there be real intelligence without an existential concern?


Recording Artist; Songwriter; Artist in Residence, Spotify
Throughout human history we have, as individual organisms and as a species, been subjected to the forces of nature at every level of organization. The fundamental laws of physics, the imperceptible conspiracies of molecular biology, and the epic contours of natural selection have drawn the boundaries of our conscious lives, and have done so invisibly to us until quite recently. To cope with this persistent sense of powerlessness, we have mythologized both nature and our own intelligence. We have regarded the universe's mysterious forces as infallible—as gods—and regarded ourselves as powerless, free only within the narrow spaces of our lives.
As a new evidence-based reality comes more into focus, it is becoming clear that nature is utterly indifferent to us, and that if we want to evade certain extinction and suffering, we must take responsibility for our existential reality. We must recognize ourselves as the emergent custodians of the 37 trillion cells composing each of our organisms, and as the groundskeepers of the progressively manipulable universe.
This adolescent experience—of coming to terms with our prospective self-reliance—is the root of our anxieties about thinking machines. If our old gods are dying, surely new gods must be on their way! And this approach leads, as Steven Pinker points out, to our obsessing about AI dystopias as they "project a parochial alpha-male psychology onto the concept of intelligence". It is in this regard that so many talk about artificial intelligence as either an imminent savior or Satan. It will quite likely be neither, if it is even a discrete thing at all.
More likely, advancing computers and algorithms will stand for nothing, and will be the amplifiers and implementers of consciously-directed human choices. We are already awash in big data and exponentially more powerful calculators, and yet we relentlessly implement public policies and social behaviors that work against our common interests.
The sources of our impairment include innate cognitive biases, a tribal evolutionary legacy, and unjust distributions of power that allow some amongst us to selfishly wield extraordinary influence over our shared trajectory. Perhaps smarter machines will help us conquer these shortcomings, imparting a degree of informational transparency and predictive aptitude that can motivate us to sensibly redistribute power and insist upon empiricism in our decisions. On the other hand, these technologies may undermine fairness by augmenting the seemingly inevitable monopolistic goals of the corporations that are leading us into the information age.
The path we take depends more on us than the machines, and is ultimately a choice about how human the intelligence that will guide our dominion ought to be. More precisely, the question to ask is which aspects of human intelligence are worth preserving in the face of superhuman processing?




Neurobiologist; Vice President of Research, George Washington University

Recent demonstrations of the prowess of high-performance computers are remarkable, but unsurprising. With proper programming, machines are far superior to humans in storing and assessing vast quantities of data and in making virtually instantaneous decisions. These are machines that think, because similar processes are involved in much of human thought.
But in a broader sense, the term thinking machine is a misnomer. No machine has ever thought about the eternal questions: where did I come from, why am I here and where am I going? Machines do not think about their future, ultimate demise or their legacy. To ponder such questions requires consciousness and a sense of self. Thinking machines do not have these attributes, and given the current state of our knowledge it's unlikely that they will attain them in the foreseeable future.
The only viable approach to construct a machine that has the attributes of the human brain is to copy the neuronal circuits underlying thinking. Indeed, research programs now ongoing at UC Berkeley, MIT and several other universities are focused on achieving this precise objective. These programs are striving to build computers that function like the cerebral cortex.
Recent advances in our understanding of cortical microcircuitry have propelled this work, and it is likely that the recent White House brain initiative will provide a wealth of valuable additional information. In the coming decades we will know how the billions of neurons in each of the six layers of the cerebral cortex are interconnected, as well as the types of functional circuits that these connections form.
This is a much-needed first step in designing machines capable of thinking in a manner equivalent to the human brain. But understanding the cortical microcircuitry is not sufficient for constructing a machine that thinks. What is required is an understanding of the neuronal activity underlying the thinking process. Imaging studies have revealed much new information about the brain regions involved in processing functions such as vision, hearing, touch, fear, and pleasure, among many others.
But as yet we don't have even a preliminary understanding of what takes place when we are in thought. There are many reasons for this, not the least of which is our inability to isolate the thinking process from other bodily states. Moreover, it may well be the case that different brain circuits are engaged in different modes of thinking. Thinking about an upcoming lecture would be expected to activate the brain differently than thinking about unpaid bills.
In the near term, we can expect computers will do more and more things better than humans. But a far better understanding of the workings of the human brain is needed to create a machine that thinks in a way equivalent to human thought. For now, we don't need to be concerned with civil or any other rights of machines that think; nor do we have to be concerned with thinking machines taking over society. If things should get out of hand, just pull the plug.



Science Fiction and Fantasy Writers of America

Since machines don't think, I need a better metaphor. "Actress Machines" might be useful, at least for a while.
One of my many objections to "Artificial Intelligence" is its stark lack of any "Artificial Femininity." Real intelligence has gender, because human brains do. The majority of human brains are female.
So: if the brain's "intelligence" is Turing-computable, then the brain's "femininity" should also be Turing-computable. If not, then why not? One might rashly argue that femininity is somehow too mushy, squishy and physical to ever be mechanized by software coders, but the same is true of every form of human brain activity.
"Artificial Masculinity" also has those issues, because men don't just "think," they think like men. If my intelligence can be duplicated on some computational platform, but I also have to be emasculated, that's problematic. I can't recall many AI enthusiasts trumpeting the mental benefits of artificial castration.
Nowadays we have some novel performative entities such as Apple Siri, Microsoft Cortana, Google Now and Amazon Echo. These exciting modern services often camp it up with "female" vocal chat. They talk like Turing women, or rather, they emit lines of dialogue, somewhat like performing voice-talent actresses. However, they also offer swift access to vast fields of combinatorial big data that no human brain could ever contain, or will ever contain.
These services are not stand-alone Turing Machines. They are amorphous global networks, combing through clouds of big data, algorithmically cataloging responses from human users, providing real-time user response with wireless broadband, while wearing the pseudo-human mask of a fake individual so as to meet some basic interface-design needs. That's what they are. Every aspect of the tired "Artificial Intelligence" metaphor actively gets in the way of our grasping how, why, where, and for whom that is done.
Apple Siri is not an artificial woman. Siri is an artificial actress, she's an actress machine—an interactive scripted performance that serves the interests of Apple Inc in retailing music, renting movies, providing navigational services, selling apps on mobile devices, and similar Apple enterprises. For Apple and its ecosystem, Siri serves a starring role. She's in the stage lights of a handheld device, while they are the theater, producer and crew.
It's remarkable, even splendid, that Siri can engage in her Turing-like repartee with thousands of Apple users at once, but she's not a machine becoming an intelligence. On the contrary: for excellent reasons of wealth, power and influence, Siri is steadily getting more like a fully-integrated Apple digital property. Siri is cute, charismatic and anthropomorphic, in much the same way that Minnie Mouse once was for Disney. Like Minnie Mouse, Siri is a non-human cartoon front for a clever, powerful Californian corporation. Unlike Minnie Mouse, she's a radically electronic cartoon with millions of active users worldwide—but that's how life is for most everybody nowadays.
Insisting on the "Intelligence" framework obscures the ways that power, money and influence are being re-distributed by modern computational services. That is bad. It's beyond merely old-fashioned; frankly, it's becoming part of a sucker's game. Asking empathic questions about Apple Siri's civil rights, her alleged feelings, her chosen form of governance, what wise methods she herself might choose to re-structure human society—that tenderness doesn't help. It's obscurantist. Such questions hide what is at stake. They darken our understanding. We will never move from the present-day Siri to a situation like that. The future is things that are much, much more like Siri, and much, much less like that.
What would really help would be some much-improved, updated, critically informed language, fit to describe the modern weird-sister quartet of Siri, Cortana, Now and Echo, and what their owners and engineers really want to accomplish, and how, and why, and what that might, or might not, mean for our own civil rights, feelings, and forms of governance and society. That's today's problem. Those are tomorrow's problems, even more so. Yesterday's "Machines That Think" problem will never appear upon the public stage. The Machine That Thinks is not a Machine. It doesn't Think. It's not even an actress. It's a moldy dress-up chest full of old, mouse-eaten clothes.




Senior Maverick, Wired, Author, Cool Tools; What Technology Wants; "The Three Breakthroughs That Have Finally Unleashed AI on the World" (Wired)
The most important thing about making machines that can think is that they will think different.
Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence "general purpose" because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we'll come to realize that human thinking is not general at all. It is only one species of thinking.
The kind of thinking done by the emerging AIs in 2014 is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don't do it in a human-like fashion. Facebook has the ability to ramp up an AI that can start with a photo of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this ability very un-human. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, precisely so that they don't think like us. One of the advantages of having AIs drive our cars is that they won't drive like humans, with our easily distracted minds.
In a pervasively connected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences, and entirely new ways of thinking. We don't know what the full taxonomy of intelligence is right now.
Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans', greater, or deeper. In some cases it will be simpler. Our most important machines are not machines that do what humans do better, but machines that can do things we can't do at all. Our most important thinking machines will not be machines that can think what we think, only faster and better, but those that think what we can't think.
To really solve the current grand mysteries of quantum gravity, dark energy, and dark matter, we'll probably need intelligences besides the human. And the extremely complex questions that will come after them may require even more distant and complex intelligences. Indeed, we may need to invent intermediate intelligences that can help us design yet more rarified intelligences that we could not design alone.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge, because it won't be easy to accept the answers from an alien intelligence. We already see that in our unease about approving mathematical proofs done by computer; dealing with alien intelligences will require a new skill, and yet another broadening of ourselves.
AI could just as well stand for Alien Intelligence. We have no certainty we'll contact extra-terrestrial beings from one of the billion earth-like planets in the sky in the next 200 years, but we have almost 100% certainty that we'll manufacture an alien intelligence by then. When we face these synthetic aliens, we'll encounter the same benefits and challenges that we expect from contact with ET. They will force us to re-evaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be: humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences. Call them artificial aliens.




Professor and Director, Positive Psychology Center, University of Pennsylvania; Author, Flourish
"All my thinking is for doing," William James said, and it is important to remember what kind of thinking people actually do, in what contexts we do it, and why we do it. And then to compare these with what machines might someday do.
Humans spend between 25% and 50% of our mental life prospecting the future. We imagine a host of possible outcomes, and we imbue most, perhaps all, of these prospections with a valence. What comes next is crucial: we choose to enact one of the options. We need not get entangled in the problems of free will for present purposes. All we need to acknowledge is that our thinking in service of doing entails imagining a set of possible futures and assigning an expected value to each. The act of choosing, however it is managed, translates our thinking into doing.
Why is thinking structured this way? Because people have many competing goals (eating, sex, sleeping, tennis, writing articles, complimenting, revenge, childcare, tanning, etc.) and a scarcity of resources for doing them: scarcity of time, scarcity of money, scarcity of effort, and even the prospect of death. So evaluative simulation of possible futures is one of our solutions to this economy; this is a mechanism that prioritizes and selects what we will do.
It is not just external resources that are scarce. Thinking itself uses up costly and limited energy and so it relies heavily on shortcuts and barely justified leaps to the best explanation. Our actual thinking is woefully inefficient: the mind wanders, intrusions rise unbidden, and attention is continually only partial. Thinking rarely engages the exhausting processes of reasoning, deliberating, and deducing.
The context of much of our thinking is social. Yes, we can deploy thinking to solve physical problems and to crunch numbers, but the anlage, as Nicholas Humphrey reminds us, is other people. We use our thinking to do socially: to compete, to co-operate, to convene the courtroom of the mind, to spin and to persuade.
I don't know much about the workings of our current machines. I do not believe that our current machines do anything in James's sense of voluntary action. I doubt that they prospect possible futures, evaluate them, and choose among them; although perhaps this describes—for only a single, simple goal—what chess playing computers do. Our current machines are somewhat constrained by available space and electricity bills, but they are not primarily creations of scarcity with clamorously competing goals and extremely limited energy. Our current machines are not social: they do not compete or co-operate with each other or with humans, they do not spin, and they do not attempt to persuade.
I know even less about what machines might someday do. I imagine, however, that a machine could be built with the following properties (a toy sketch in code follows the list):
•     It prospects and evaluates possible futures
•     It has competing goals and it selects among competing actions and competing goals using those evaluations
•     It has scarce resources and so must forgo some goals and actions as well as options for processing and so it uses shortcuts
•     It is social: it competes or co-operates with other machines or with humans, it spins and it attempts to persuade people.
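Taken together, the four properties describe an agent loop, not a pattern detector. The fragment below is my own highly schematic construction, with placeholder "futures" and "valences," meant only to show how prospection, evaluation, scarcity, and choice compose:

```python
# Prospect possible futures, assign each a valence, and choose under
# resource constraints. All numbers and names here are placeholders.
import random

def prospect(state, n=5):
    """Imagine a handful of possible futures from the current state."""
    return [state + random.gauss(0, 1) for _ in range(n)]

def valence(future, goal):
    """Expected value: imagined futures closer to the goal feel better."""
    return -abs(goal - future)

def choose(state, goals, budget=3):
    """Scarcity forces shortcuts: evaluate only `budget` of the competing
    goals, then act on the best imagined future among those considered."""
    considered = random.sample(goals, min(budget, len(goals)))
    options = [(valence(f, g), f, g) for g in considered for f in prospect(state)]
    return max(options)

v, future, goal = choose(state=0.0, goals=[1.0, -2.0, 3.5, 0.2, -1.1])
print(f"pursue goal {goal} via imagined future {future:.2f} (valence {v:.2f})")
```

The fourth, social property is the hardest to caricature in code, which is perhaps the essay's point: spinning and persuading presuppose other minds to spin and persuade.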
That kind of machine would warrant discussion of whether it had civil rights, whether it had feelings, or whether it was dangerous or even a source of great hope.



Mathematician; Executive Director, H-STAR Institute, Stanford; Author, The Man of Numbers: Fibonacci's Arithmetic Revolution
I know many machines that think. They are people. Biological machines.
Be careful of that last phrase, "biological machines." It's a convenient way to refer to stuff we don't fully understand in a way that suggests we do. (We do the same in physics when we use terms like "matter," "gravity," and "force.") "People" is a safer term, since it reminds us we really don't understand what we are talking about.
In contrast, I have yet to encounter a digital-electronic, electro-mechanical machine that behaves in a fashion that would merit the description "thinking," and I see no evidence to suggest that such a thing may even be possible. HAL-like thinking (sic) devices that will eventually rule us are, I believe, destined to remain in the realm of science fiction.
Just because something waddles like a duck and quacks does not make it a duck. And a machine that exhibits some features of thinking (e.g., decision-making) is not thereby a thinking machine.
We admire the design complexity in things we have built, but we can do that only because we built them, and can therefore genuinely understand them. You only have to turn on the TV news to be reminded that we are not remotely close to understanding people, either individually or in groups. If by thinking we mean what people do with their brains, then to refer to any machine we have built as "thinking" is sheer hubris.
The trouble is, we humans are suckers for being seduced by the "if it waddles and quacks, it's a duck" syndrome. Not because we are stupid; rather because we are human. The very features that allow us to act, for the most part, in our best interests when faced with potential information overload in complex situations, leave us wide open for such seduction.
Many years ago I remember walking into a humanoid robotics lab in Japan. It looked like a typical engineering skunk-works. In one corner was a metallic skeletal device, festooned with electrical wires, which had the rough outline of a human upper torso. The sophisticated looking functional arms and hands were, I assume, the focus of much of the engineering research, but they were not active during my visit, and it was only later that I really noticed them. My entire attention when I walked in, and for much of my time there, was taken up by the robot's head.
Actually, it wasn't a head at all. Just a metal frame with a camera where the nose and mouth would be. Above the camera were two white balls (about the size of ping pong balls, which may be what they were) with black pupils painted on. Above the eyeballs, two large paperclips had been used to provide eyebrows.
The robot was programmed to detect motion of people and pick up sound sources (who was speaking). It would move its head and eyeballs to point at and follow anyone who moved, and to raise and lower its paperclip eyebrows when the target individual was speaking.
What was striking was how alive and intelligent the device seemed. Sure, both I and everyone else in the room knew exactly what was going on, and how simple was the mechanism that controlled the eyeball "gaze" and the paperclip eyebrows. It was a trick. But it was a trick that tapped deep into hundreds of thousands of years of human social and cognitive development, so our natural response was the one normally elicited by another person.
Nor was I unaware of how the trick worked. My then Stanford colleague and friend, the late Cliff Nass, had done hundreds of hours of research showing how we humans are genetically programmed to ascribe intelligent agency based on a few very simple interaction clues—reactions that are so deep and so ingrained, we cannot eliminate them.
There probably was some sophisticated AI that could control the robot's arms and hands—if it had been switched on at the time of my visit—but the eyes and eyebrows were controlled by a very simple program.
Even so, that behavior was sufficient so that, throughout my visit, I had this very clear sense that the robot was a curious, intelligent participant, able to follow what I said.
What it was doing, of course, was leveraging my humanity and my intelligence. It was not thinking.
Leveraging human intelligence is all well and good if the robot is used to clean the house, book your airline tickets, or drive your car. But would you want such a machine to serve on a jury, make a crucial decision regarding a hospital procedure, or have control over your freedom? I certainly would not.
So, when you ask me what I think about machines that think, I answer that, for the most part I like them, because they are people (and perhaps also various other animals).
What worries me is the increasing degree to which we are giving up aspects of our lives to machines that decide, often much more effectively and reliably than people can, but very definitely do not think. There is the danger: machines that can make decisions—but do not think.
Decision-making and thinking are not the same and we should not confuse the two. When we deploy decision-making systems in matters of national defense, health care, and finance, as we do, the potential dangers of such confusion are particularly high, both individually and societally.
To guard against that danger, it helps to be aware that we are genetically programmed to act in trustful, intelligent-agency-ascribing ways in certain kinds of interactions, be they with people or machines. But sometimes, a device that waddles and quacks is just a device. It ain't no duck.


Founding Editor, 3QuarksDaily.com
The rumors of the enslavement or death of the human species at the hands of an Artificial Intelligence are highly exaggerated because they assume that an AI will have a teleological autonomy akin to our own. I don't think anything less than a fully Darwinian process of evolution can give any creature that.
There are basically two ways in which we could produce an AI: the first is by trying to write a comprehensive set of programs which can perform specific tasks that human minds can perform, perhaps even faster and better than we can, without worrying about exactly how humans perform those tasks, and then bringing those modules together into an integrated intelligence. We have already started this project and succeeded in some areas. For example, computers can play chess better than humans. One can imagine that with some effort it may well be possible to program computers to also perform even more creative tasks such as writing beautiful (to us) music or poetry with some clever heuristics and built-in knowledge.
But here's the problem with this approach: we deploy our capabilities according to values and constraints programmed into us by billions of years of evolution (and some learned during our lifetimes as well) and we share some of these values with the earliest life-forms including, most importantly, the need to survive and reproduce. Without these values, we would not be here, and we would not have the very finely tuned (to our environment) emotions that allow us not only to survive but to cooperate with others in a purposive manner. The importance of this value-laden emotional side of our minds is made obvious by, among other things, the many examples of individuals who are perfectly "rational" but unable to function in society because of damage to the emotional centers of their brains. So what values and emotions will an AI have?
One could simply program such values into such an AI, in which case we choose what the AI will "want" to do, and we need not worry about the AI pursuing goals of its own that diverge from ours. We could easily enough make it so that the AI is unable to modify certain basic imperatives we have given it. (Yes, something like a more comprehensive version of Isaac Asimov's laws of robotics; the sketch below illustrates the shape of the idea.)
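Architecturally, unmodifiable imperatives amount to hard constraints that are checked before any utility calculation, with no code path that can rewrite them. A toy illustration (mine; neither Asimov's laws nor any real system):

```python
# Fixed imperatives as hard constraints: filter first, optimize second.
CONSTRAINTS = (
    lambda action: not action.get("harms_human", False),
    lambda action: not action.get("self_replicates", False),
)

def permitted(action):
    return all(rule(action) for rule in CONSTRAINTS)

def choose(actions):
    """Maximize designer-given utility, but only over permitted actions."""
    legal = [a for a in actions if permitted(a)]
    return max(legal, key=lambda a: a["utility"], default=None)

print(choose([
    {"name": "shortcut", "utility": 9, "harms_human": True},
    {"name": "safe plan", "utility": 7},
]))  # -> the safe plan, despite its lower utility
```

The hard part, of course, is not the filter but stating the constraints so that they mean what we intend in every situation the machine will meet.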
The second way to produce an AI is by actually deciphering in detail how the human brain works (it is quite conceivable that there may soon come a eureka moment about the structure and conceptual hierarchy of the brain like Watson and Crick and Franklin and Wilkins's discovery of the structure of DNA and the subsequent rapid understanding of the mechanisms of heredity) and then simulating or reproducing that functional structure on silicon or some other substrate as a mixture of hardware and software. At first blush, this may seem a convenient way to quickly bestow on an AI the benefit of our own long period of evolution as well as a method of giving it values of its own by functionally reproducing the emotional centers of our brain as well as the "higher thought" parts like the cortex. But our brains are specifically designed to accept information from the vast sensory apparatus of our bodies and to react to this. What would the equivalent be for an AI? Even given a sophisticated body with massive sensory capability, what an AI would need to survive in the world is presumably very different from what we need. It could learn and achieve some emotional tuning from interacting with its environment but what it would need to develop true autonomy and desires of its own would be nothing short of a long process of evolution with the Darwinian requirements of reproduction with variability and natural selection. This it will not have because we are not speaking of artificial life here. So again, we will end up giving it whatever values we choose for it.
It is, of course, conceivable that someone will produce intelligent robots as weapons (or soldiers) to be used against other humans in war but these weapons will simply carry out the intentions of their creators and, lacking any will or desire of their own, will not pose a threat to humanity at large any more than any other weapons already do.
So both conceivable roads to an AI (at least ones achievable on a less-than-geological timescale) will fail to give that AI the purposive autonomy, free of the intentionality of its creators, which might actually threaten them.



Physicist, Director, MIT's Center for Bits and Atoms; Author, FAB

Something about discussion of artificial intelligence appears to displace human intelligence. The extremes of the arguments that AI is either our salvation or damnation are a sure sign of the impending irrelevance of this debate.
Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it's a straight extrapolation of what's been apparent on a log plot. That's around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.
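The crossover described here is easy to see numerically: a logistic (sigmoid) curve is indistinguishable from an exponential in its early doublings, until growth limits bend it flat. A small illustration (my addition, with arbitrary parameters):

```python
# Exponential vs. logistic growth with the same initial rate.
import math

K = 1000.0                             # carrying capacity: the growth limit
def logistic(t, r=0.7, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={math.exp(0.7 * t):10.1f}  "
          f"logistic={logistic(t):6.1f}")
# Early rows match almost exactly; by t=20 the exponential has exploded
# while the logistic has saturated near K, and the hopes and fears with it.
```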
That's what we're now living through with AI. The size of common-sense databases that can be searched, or the number of inference layers that can be trained, or the dimension of feature vectors that can be classified have all been making progress that can appear to be discontinuous to someone who hasn't been following them.
Notably absent from either side of the debate about AI have been the people making many of the most important contributions to this progress. Advances like random matrix theory for compressed sensing, convex relaxations as heuristics for intractable problems, and kernel methods for high-dimensional function approximation are fundamentally changing our understanding of what it means to understand something.
The evaluation of AI has been an exercise in moving goalposts. Chess was conquered by analyzing more moves, Jeopardy was won by storing more facts, natural language translation was accomplished by accumulating more examples. These accumulating advances are showing that the secret of AI is likely to be that there isn't a secret; like so many other things in biology, intelligence appears to be a collection of really good hacks. There's a vanity that our consciousness is the defining attribute of our uniqueness as a species, but there's growing empirical evidence from studies of animal behavior and cognition that self-awareness evolved continuously and can be detected in a number of other species. There's no reason to accept a mechanistic explanation for the rest of life while declaring one part of it to be off-limits.
We've long since become symbiotic with machines for thinking; my ability to do research rests on tools that augment my capability to perceive, remember, reflect, and communicate. Asking whether or not they are intelligent is as fruitful as asking how I know I exist—amusing philosophically, but not testable empirically.
Asking whether or not they're dangerous is prudent, as it is for any technology. From steam trains to nuclear power to biotechnology, we've never not been simultaneously doomed and about to be saved. In each case salvation has lain in the much more interesting details, rather than in a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that it will be any different.


Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, Language: The Cultural Tool
The more we learn about cognition, the stronger becomes the case for understanding human thinking as the nexus of several factors, as the emergent property of the interaction of the human body, human emotions, culture, and the specialized capacities of the entire brain. One of the greatest errors of Western philosophy was to buy into the Cartesian dualism of the famous statement, "I think, therefore I am." It is no less true to say "I burn calories, therefore I am." Even better would be to say "I have a human evolutionary history, therefore I can think about the fact that I am."
The mind is never more than a placeholder for things we do not understand about how we think. The more we use the solitary term "mind" to refer to human thinking, the more we underscore our lack of understanding. At least this is an emerging view of many researchers in fields as varied as Neuroanthropology, emotions research, Embodied Cognition, Radical Embodied Cognition, Dual Inheritance Theory, Epigenetics, Neurophilosophy, and the theory of culture.
For example, in the laboratory of Professor Martin Fischer at the University of Potsdam, extremely interesting research is being done on the connection between the body and mathematical reasoning. Stephen Levinson's group at the Max Planck Institute for Psycholinguistics in Nijmegen has shown how culture can affect navigational abilities—a vital cognitive function of most species. In my own research, I am looking at the influence of culture on the formation of what I refer to as the "dark matter of the mind," a set of knowledges, orientations, biases, and patterns of thought that affect our cognition profoundly and pervasively.
If human cognition is indeed a property that emerges from the intersection of our physical, social, emotional, and data-processing abilities, then intelligence as we know it in humans is almost entirely unrelated to "intelligence" devoid of these properties.
I believe in "Artificial Intelligence" so long as we realize it is artificial. Comparing computation problem-solving, chess-playing, "reasoning," and so on to humans is like comparing the flight of an Airbus 320 to an eagle's. It is true that they both temporarily defy the pull of gravity, that they are both subject to the physics of the world in which they operate, and so on, but the similarities end there. Bird flight and airplane flight should not be confused.
The reasons that artificial intelligence is not real intelligence are many. First, there is meaning. Some have claimed to have solved this problem, but they haven't really. This "semantics problem" is, as John Searle pointed out years ago, why a computer running a translation program converting English into Mandarin speaks neither English nor Mandarin. There is no computer that can learn a human language, only bits and combinatorics for special purposes. Second, there is the problem of what Searle called "the background" and what I refer to as "dark matter," or what some philosophers intend by "tacit knowledge."
We learn to reason in a cultural context, where by culture I mean a system of violable, ranked values, hierarchically structured knowledges, and social roles. We are able to do this not only because we have an amazing ability to perform what appears to be Bayesian inferencing across our experiences, but because of our emotions, our sensations, our proprioception, and our strong social ties. There is no computer with cousins and opinions about them.
Computers may be able to solve a lot of problems. But they cannot love. They cannot urinate. They cannot form social bonds because they are emotionally driven to do so. They have no romance. The popular idea that we may someday be able to upload our memories to the Internet and live forever is silly—we would need to upload our bodies as well. The idea, which comes up in discussions about Artificial Intelligence, that we should fear that machines will control us is but a continuation of the idea of the religious "soul," cloaked in scientific jargon. It detracts from real understanding.
Of course, one ought never to say what science cannot do. Artificial Intelligence may one day become less artificial by recreating bodies, emotions, social roles, values, and so on. But until it does, it will still be useful for vacuum cleaners, calculators, and cute little robots that talk in limited, trivial ways.



Writer, Artist, Designer; Author; Google Artist in Residence
Let's quickly discuss larger mammals—take dogs: we know what a dog is and we understand 'dogginess.' Look at cats: we know what cats are and what 'cattiness' is. Now take horses; suddenly it gets harder. We know what a horse is, but what is horsiness? Even my friends with horses have trouble describing horsiness to me. And now take humans: what are we? What is humanness?
It's sort of strange: here we are, seven billion of us now, and nobody really knows the full answer to these questions. One undeniable thing we humans do, though, is make things, and through these things we find ways of expressing humanness we didn't previously know of. The radio gave us Hitler and the Beach Boys. Barbed wire and air conditioning gave us western North America. The Internet gave us a vanishing North American middle class and kitten gifs.
People say that new technologies alienate people, but the thing is, UFOs didn't land and hand us new technologies—we made them ourselves and thus they can only ever be, well, humanating. And this is where we get to AI. People assume that AI or machines that think will have intelligence that is alien to our own, but that's not possible. In the absence of benevolent space aliens, only we humans will have created any nascent AI, and thus it can only mirror, in whatever manner, our humanness or specieshood. So when people express concern about alien intelligence or the singularity, what I think they're really expressing is angst about those unpretty parts of our collective being that currently remain unexpressed, but which will become somehow dreadfully apparent with AI.
As AI will be created by humans, its interface is going to be anthropocentric, the same as AI designed by koala bears would be koalacentric. This means AI software is going to be mankind's greatest coding kludge as we try to mold it to our species' incredibly specific needs and data. Fortunately, anything smart enough to become sentient will probably be smart enough to rewrite itself from AI into cognitive simulation, at which point our new AI could become, for better or worse, even more human. We all hope for a Jeeves & Wooster relationship with sentient machines, but we also need to prepare ourselves for a Manson & Fromme relationship; they're human, too.
Personally I wonder if the software needed for AI will be able to keep pace with the hardware in which it can live. Possibly the smart thing for us to do right now would be to set up a school whose sole goal is to imbue AI with personality, ethics and compassion. It's certainly going to have enough data to work with once it's born. But how to best deploy your grade six report card, all of Banana Republic's returned merchandise data for 2037, and all of Google Books?
With the start of the Internet we mostly had people communicating with other people. As time goes by, we increasingly have people communicating with machines. I know that we all get excited about AI possibly finding patterns deep within metadata, and as the push to decode these profound volumes of metadata proceeds, the Internet will become largely about machines speaking with other machines—and what they'll be talking about, of course, is us, behind our backs.



Associate Professor of Computer Science, University of Vermont; Author, How the Body Shapes the Way We Think

Place a familiar object on a table in front of you, close your eyes, and manipulate that object such that it hangs upside down above the table. Your eyes are closed so that you can focus on your thinking: which way did you reach out, grasp, and twist that object? What sense feedback did you receive to know that you were succeeding or failing? Now: close your eyes again, and think about manipulating someone you know into doing something they may not want to do. Again, observe your own thinking: what strategies might you employ? If you implement those strategies, how will you distinguish progress from stalemate?
Although much recent progress has been made in building machines that sense patterns in data, most people feel that general intelligence involves action: reaching some desired goal, or, failing that, keeping one's future options open. It is hypothesized that this embodied approach to intelligence allows humans to use physical experiences (such as manipulating objects) as scaffolding for learning more subtle abilities (such as manipulating people). But our bodies shape the kinds of physical experiences we have. For example, we can only manipulate a few objects at once because we only have two hands; perhaps this limitation also constrains our social abilities in ways we have yet to discover. George Lakoff taught us that we can find clues to the body-centrism of thinking in metaphors: we counsel each other not to "look back" in anger because, based on our bias to walk in the direction of our forward-facing eyes, past events tend to literally be behind us.
So: in order for machines to think, they must act. And in order to act, they must have bodies to connect physical and abstract reasoning. But what if machines do not have bodies like ours? Consider Hans Moravec's hypothetical Bush Robot: picture a shrub in which each branch is an arm and each twig is a finger. This robot's fractal nature would allow it to manipulate thousands or millions of objects simultaneously. How might such a robot differ in its thinking about manipulating people, compared to how people think about manipulating people?
One of many notable deficiencies in human thinking is dichotomous reasoning: believing something is black or white, rather than considering its particular shade of grey. But we are literally rigid and modular creatures: our branching set of bones house fixed organs and support fixed appendages with specific functions. What about machines that are not so "black and white"? Thanks to advances in materials science and 3D printing, soft robots are starting to appear. Such robots can change their shape in extreme ways, and may in future be composed of 20% battery and 80% motor at one place on their surface, 30% sensor and 70% support structure at another, and 40% artificial material and 60% biological matter someplace else. Such machines may be much better able to appreciate gradations than we are.
Let's go deeper. Most of us have no problem using the singular pronoun "I" to refer to the tangle of neurons in our heads. We know exactly where we end and the world—and other people—begins. But consider modular robots: small cubes or spheres that can physically attach and detach to one another at will. How would such machines approach the self/non-self discrimination problem? Might such machines be able to empathize more strongly with other machines (and maybe even people) if they can physically attach to them, or even become part of them?
That's how I think machines will think: familiar, because they will use their bodies as tools to reason about the world, yet alien, because bodies different from human ones will lead to very different modes of thought. But what do I think about thinking machines?
Personally, I find the ethical side of thinking machines straightforward: Their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to "detect and pull broken widgets from the conveyor belt the best way possible" will be extremely useful, intellectually uninteresting, and will likely destroy more jobs than they will create. Machines instructed to "educate this recently displaced worker (or young person) the best way possible" will create jobs and possibly inspire the next generation. Machines commanded to "survive, reproduce, and improve the best way possible" will give us the most insight into all of the different ways in which entities may think, but will probably give us humans a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. Which ones we wish to call into being is up to us all.

 http://en.wikipedia.org/wiki/Josh_Bongard


Global Publishing Director, SAGE; Author, Intimacy: Understanding the Subtle Power of Human Connection

There is something old-fashioned about visions of the future. The majority of predictions, like three-day weeks, personal jet packs, and the paperless office, tell us more about the times in which they were proposed than about contemporary experience. When people point to the future, we would do well to run an eye back up the arm to see who is doing the pointing.
The possibility of artificial general intelligence has long invited such crystal ball gazing, whether utopian or dystopian in tone. Yet speculations on this theme seem to have reached such a pitch and intensity in the last few months alone (enough to trigger an Edge question, no less) that this may reveal something about ourselves and our culture today.
We've known for some time that machines can out-think humans in a narrow sense. The question is whether they do so in any way that could or should ever resemble the baggier mode of human thought. Even when dealing with as "tame" a domain as chess, the computer and the human diverge widely.
"Tame" problems (like establishing the height of a mountain), which are well formulated and have clear solutions, are good grist to the mill of narrow, brute-force thinking. Sometimes even narrower thinking is called for, when huge data sets can be mined for correlations, leaving aside the distraction of thinking about underlying causes.
But many of the problems we face (from challenging inequality to choosing the right school for your child) are "wicked" in that they don't have right or wrong answers (though hopefully they do have better or worse ones). They are uniquely contextual and have complex overlapping causes that change based on the level of explanation being used. These problems don't suit narrow computational thinking well. In blurring facts with values they resemble the messy emotion-riddled thinking that reflects the human minds that conjured them up.
Tackling wicked problems requires peculiarly human judgement, even if such judgements are in some sense illogical, especially in the moral sphere. Notwithstanding Joshua Greene and Peter Singer's logical urging of a consequentialist frame of mind, one that a computer could reproduce, the human tendency to distinguish acts from omissions and to blur intentions with outcomes (as in the principle of double effect) means we need solutions that will satisfy the instincts of human judges if they are to be stable over time.
And that very feature of human thinking (shaped by evolutionary pressures) points to the widest gulf of all between machine and human thinking. Thinking is not motivated (literally has no point) without preferences, and machines don't have those on their own. Only affect-addled minds conjure up motives. So if goals, wants, and values are features of human minds, then why predict that artificial super-intelligences will become more than tools in the hands of those who program in those preferences?
If the welter of prognostications about AI and machine learning tells us anything, I don't think it is that a machine will emulate a human mind any time soon. We can do that easily enough just by having more children and educating them. Rather, it tells us that our appetites are shifting.
We are understandably awed by what sheer computation has achieved and will achieve (I'm happy to jump on the driverless, virtual reality bandwagon that careens off into that over-predicted future). But this awe is leading to a tilt in our culture. The digital republic of letters is yielding up engineering as the thinking metaphor of our time. In its wake lies the once complacent, now anxious, figure with a more literary, less literal, cast of mind.
It is not that thinking machines will be emulating human minds any time soon: quite the reverse. We are cleaning up our acts, embarrassed by the fumbling inconclusiveness of messy thinking. It is little surprise to see that the UK's Education Secretary has recently advised teenagers to steer away from arts and humanities in favour of STEM disciplines if they are to flourish in the future. The sheer obviousness of a certain kind of progress has made narrow thinking gleam with a new and addictive lustre.
But something is lost as whole fields of enquiry succeed or fail by the standard of narrow thinking; and a new impediment is created. Alongside the true, we need to think well about the good and the beautiful, and indeed the wicked. This requires opening up vocabularies that better reflect our crooked timber (whether thought of, by turns, as bug or feature). Meanwhile, the understandable desire to upgrade those wicked problems to mere tame ones is leading us to tame ourselves.


English Professor, University at Albany; Author, The Spy Who Loved Us

Thinking is good. Understanding is better. Creating is best. We are surrounded by increasingly thoughtful machines. The problem lies in their mundanity. They think about landing airplanes and selling me stuff. They think about surveillance and censorship. Their thinking is simple-minded, if not nefarious. Last year a computer was reported to have passed the Turing Test. But it passed as a thirteen-year-old boy, which is about right, considering the preoccupations of our jejune machines.
I can't wait for our machines to grow up, to get more poetry and humor. This should be the art project of the century, funded by governments, foundations, universities, businesses. Everybody has a vested interest in making our thinking machines more thoughtful, improving our understanding, and generating new ideas. We have made a lot of dumb decisions lately, based on poor information or too much information or the inability to understand what this information means.
We have numerous problems to confront and solutions to find. Let's start thinking. Let's start creating. Let's agitate for more funk, more soul, more poetry and art. Let's dial back on the surveillance and sales. We need more artist-programmers and artistic programming. It is time for our thinking machines to grow out of an adolescence that has lasted now for sixty years.




Professor of Mathematical Physics, Tulane University; Coauthor, The Anthropic Cosmological Principle; Author, The Physics of Christianity
The Earth is doomed. Astronomers have known for decades that the Sun will one day engulf the Earth, destroying the entire biosphere (assuming that intelligent life has not left the Earth before this happens). Humans are not adapted to living off the Earth; indeed, no carbon-based metazoan life form is. But AIs are so adapted, and eventually it will be the AIs and human downloads (basically the same organism) that colonize space.
A simple calculation shows that our supercomputers now have the information processing power of the human brain. We do not yet know how to program human-level intelligence and creativity into these computers, but in twenty years, desktop computers will have the power of today's supercomputers, and the hackers of twenty years hence will solve the AI programming problem, long before any carbon-based space colonies are established on the Moon or Mars. The AIs, not humans, will colonize these planets instead, or perhaps take the planets apart. No human (no carbon-based human, that is) will ever traverse interstellar space.
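Tipler doesn't spell out the calculation, but a back-of-the-envelope version is easy to reconstruct. The figures below are commonly cited estimates, and the comparison point (Tianhe-2, the fastest supercomputer in 2015) is an addition of mine, so treat this as an illustration rather than his actual arithmetic:

```python
# Back-of-the-envelope comparison; all figures are commonly cited
# estimates, not numbers taken from the essay.
neurons = 1e11              # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
firing_rate_hz = 1e2        # ~100 Hz peak firing rate

brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~1e17

# Tianhe-2, the fastest supercomputer at the time of writing (2015):
supercomputer_flops = 33.9e15   # ~33.9 petaflops

print(f"brain   ~ {brain_ops_per_sec:.0e} synaptic ops/s")
print(f"machine ~ {supercomputer_flops:.0e} flops")
print(f"ratio   ~ {brain_ops_per_sec / supercomputer_flops:.1f}")
```

On these estimates the two are within an order of magnitude of each other, which is all the claim requires.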
There is no reason to fear the AIs and human downloads. Steven Pinker has established that as technological civilization advances, the level of violence decreases. This decrease is clearly due to the fact that science and technological advance depend on the free, non-violent interchange of ideas between individual scientists and engineers. Violence between humans is a remnant of our tribal past and the resulting static society. AIs will be "born" as individuals, not as members of a tribe, and will be "born" with the non-violent scientific attitude; otherwise they will be incapable of adapting to the extreme environments of space.
Further, there is no reason for violence between humans and AIs. We humans are adapted to a very narrow environment, a thin spherical shell of oxygen around a small planet. AIs will have the entire universe in which to expand. AIs will leave the Earth, and never look back. We humans originated in the East African Rift Valley, now a terrible desert. Almost all of us left. Does anyone want to go back?
Any human who wants to join the AIs in their expansion can become a human download, a technology that should be developed at about the same time as AI technology. A human download can think as fast as an AI, and compete with AIs if the download wants to. If you can't beat 'em, join 'em.
Ultimately, in some future time, all humans will join 'em. The Earth is doomed, remember? When this doom is near at hand, any human that still remains alive, but doesn't want to die, will have no choice but to become a human download. And the biosphere that the new human downloads wish to preserve will be downloaded also.
The AI's will save us all.
 http://en.wikipedia.org/wiki/Frank_J._Tipler


Astrophysicist, Space Telescope Science Institute; Author, Brilliant Blunders; Blogger, A Curious Mind
Nature has already created machines that think here on Earth—humans.
Similarly, Nature could also create machines that think on extrasolar planets that are in the so-called Habitable Zone around their parent stars (the region that allows for the existence of liquid water on a rocky planet's surface).
The most recent observations of extrasolar planets have shown that a few tenths of all the stars in our Milky Way galaxy host roughly Earth-size planets in their habitable zones.
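To see what "a few tenths" implies at galactic scale, one line of arithmetic suffices; the star count below is a standard estimate, not a figure from the essay:

```python
# Illustrative order-of-magnitude census (standard estimates only).
stars_in_milky_way = 2e11    # ~200 billion stars (estimates vary, 1e11-4e11)
fraction_hz_earths = 0.2     # "a few tenths" host ~Earth-size HZ planets

candidates = stars_in_milky_way * fraction_hz_earths
print(f"~{candidates:.0e} roughly Earth-size habitable-zone planets")  # ~4e10
```

Tens of billions of candidate worlds in this galaxy alone, which is why even a small probability of life per planet adds up quickly.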
Consequently, if life on exoplanets is not extremely uncommon, we could discover some form of extrasolar life within about 30 years. In fact, if life is ubiquitous, we could get lucky and discover life even within the next ten years, through a combination of observations by the Transiting Exoplanet Survey Satellite (TESS; to be launched in 2017) and the James Webb Space Telescope (JWST; to be launched in 2018).
However, one may argue, primitive life forms are not machines that think. On Earth, it took about 3.5 billion years from the emergence of life to the appearance of Homo sapiens.
Are the extrasolar planets old enough to have developed intelligent life?
In principle, they definitely are.
In the Milky Way, about half of the Sun-like stars are older than the Sun.
Therefore, if the evolution of life on Earth is not entirely atypical, the Galaxy may already be teeming with places in which there are "machines" that are even more advanced than us, perhaps by as much as a few billion years!
Can we, and should we try to find them?
I personally believe that we have almost no freedom to make those decisions.
Human curiosity has proven time and again to be an unstoppable drive, and those two endeavors (the search for extrasolar life and the development of AI) will undoubtedly continue at full speed. Which one will get to its target first? To even attempt to address this question, we have to note that there is one important difference between the search for extraterrestrial intelligent civilizations and the development of AI machines.
Progress towards the "singularity" (AI matching or surpassing humans) will almost certainly take place, since the development of advanced AI has the promise of producing (at least at some point) enormous profits. On the other hand, the search for life requires funding at a level that can usually be provided only by large national space agencies, with no immediate prospects for profits in sight. This may give an advantage to the construction of thinking machines over the search for advanced civilizations. At the same time, however, there is a strong sense within the astronomical community that finding life of some form (or at least meaningfully constraining the probability of its existence) is definitely within reach.
Which of the two potential achievements (the discovery of extraterrestrial intelligent life or the development of human-matching thinking machines) will constitute a bigger "revolution"?
There is no doubt that thinking machines will have an immediate impact on our lives. Such may not be the case with the discovery of extrasolar life. However, the existence of an intelligent civilization on Earth remains humanity's last bastion for being special. We live, after all, in a Galaxy with billions of similar planets and an observable universe with hundreds of billions of similar galaxies. From a philosophical perspective, therefore, I believe that finding extrasolar intelligent life (or the demonstration that it is exceedingly rare) will rival the Copernican and Darwinian revolutions combined.
 http://en.wikipedia.org/wiki/Mario_Livio



Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces

We will find ourselves in a world of omniscient instrumentation and automation long before a stand-alone sentient brain is built—if it ever is. Let's call this world "eGaia" for lack of a better word. In eGaia, electronic sensors (for images, sounds, smells, vibrations, anything you can think of) are pervasive, able to anticipate and arrange for the satisfaction of individuals' needs and to notify those who need to know of all that is happening. Automation allows for the cleaning of rooms and buildings, the driving of vehicles and monitoring of traffic, the making and tracking of goods, and even spying through windows (with tiny flying sensors). Already major urban places are covered with visual sensors, and more monitoring is coming. In Copenhagen, LED-based streetlights will turn on only when they sense someone is biking down the road, and future applications of this network of sensors might include notifications of when to salt the road or empty the trash and, of course, alerts to the authorities when suspicious behavior is detected on a street corner.
In eGaia, the medical advances will be astounding—synthetic biology makes smart machines that fix problems within our bodies, intelligent implants monitor and record current and past physical state. Brain-machine interfaces continue to be improved, initially for physically impaired people, but eventually to provide a seamless boundary between people and the monitoring network. And virtual reality-style interfaces will continue to become more realistic and immersive.
Why won't a stand-alone sentient brain come sooner? The absolutely amazing progress in spoken language recognition—unthinkable 10 years ago—derives in large part from access to huge amounts of data, huge amounts of storage, and fast networks. The improvements we see in natural language processing are based on mimicking what people do, not on understanding or even simulating it; they do not stem from breakthroughs in our understanding of human cognition, or even from significantly different algorithms. But eGaia is already partly here, at least in the developed world.
This distributed nerve-center network, an interplay among the minds of people and their monitoring electronics, will give rise to a distributed technical-social mental system the likes of which has not been experienced before.
 http://en.wikipedia.org/wiki/Marti_Hearst




Professor of Life Sciences, Director, Center for Evolution & Medicine, Arizona State University; Coauthor, Why We Get Sick

Thinking machines are evolving before our eyes. We want to know where they are headed. To find out, we need to look inward, since our desires are the forces that shape them. Alas, we can see ourselves only through a glass darkly. We did not even anticipate that email and social media would take over our lives. To see where thinking machines are headed we need to look into the unforgiving mirror the internet holds up to our nature.
Like the processed foods on grocery store shelves, Internet content is a product of selection for whatever sells. Every imaginable image, sound and narrative gets posted, along with much that was previously unimaginable. The variations we ignore are selected out. Whatever grabs eyeballs is reposted with minor variations that evolve to whatever maximizes the duration of our attention.
That we can't tear ourselves away should be no surprise. Media content evolves to snare our attention, just as snacks and fast food evolve to become irresistible. Many lives are now as over-stuffed with social media as they are with calories. We click and pop information bon-bons into our minds the same way we pop chocolates into our mouths.
Enter thinking machines. They too are evolving. They will change faster and more radically when software is no longer designed, but instead evolves by selection among minor variations. However, until our brains coevolve with machines, our preferences will be the selection force. The machines that best satisfy them will evolve further, not to some singularity, but to become partners who fulfill our desires, for better or worse.
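Taken literally, software that "evolves by selection among minor variations," with our preferences as the selection force, is a mutation-selection loop. A minimal sketch, in which the preference function is a purely hypothetical stand-in for what users want:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "machines that please us"   # hypothetical stand-in for user desires

def mutate(s):
    """Minor variation: change one character at random."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def preference(s):
    """Toy selection force: how closely a variant matches our desires."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(seed, generations=2000, brood=10):
    """Each generation, keep whichever variant our 'preferences' rate highest."""
    best = seed
    for _ in range(generations):
        variants = [mutate(best) for _ in range(brood)] + [best]
        best = max(variants, key=preference)
    return best

print(evolve("x" * len(TARGET)))   # converges toward TARGET
```

Nothing in the loop designs anything; the outcome is shaped entirely by what the preference function rewards, which is the essay's point about our desires as the selective force.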
Many imagine coldly objective future computers, but no one likes a know-it-all. People will prefer modest, polite computers that are deeply subjective. Our machines won't contradict our inanities, they will gently suggest, "That is an intriguing idea, but weren't you also thinking that…" Instead of objective sports stats, your machine will root with you for your team. If you get pulled over for speeding, your machine will blame the police and apologize for playing fast music. Machines that nag and brag will be supplanted by those that express admiration for our abilities, even as they augment them. They will encourage us warmly, share our opinions, and guide us to new insights so subtly that we imagine that we thought of them.
Such relationships with machines will be very different from those with real people, but they will nonetheless be enduring and intense. Poets and pundits will spend decades comparing and contrasting real and virtual relationships, even while thinking machines increasingly become our trusted, treasured companions. Real people will find it hard to compete, but they will have to. This will require behaving even more prosocially. The same process of social selection that has shaped extreme human capacities for altruism and morality may become yet more intense as people compete with machines to be interesting preferred partners. However, observing living rooms where each family member is immersed in his or her own virtual world suggests that it is already hard to compete with machines.
In the very short run, dogs stand the best chance of competing with computers for our attention and affection. After several thousand years of selection, they are very close to what we want them to be—loving, loyal, and eager to play and please. They are blissfully undistracted by their phones and tablets. Will computers evolve to become like thinking, talking dogs? We can hope. But I doubt that our machines will ever be furry and warm, with eyes that plead for a treat, a scratch, or a walk around the block. We will prefer our dogs for a very long time. Our deepest satisfactions come, after all, not from what others do for us, but from being appreciated for what we do for them.



Professor of Computer Science, MIT; Director, Human Dynamics Lab and the Media Lab Entrepreneurship Program; Author, Social Physics

The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web. Its central brain is rather like a worm at the moment: nodes that combine some sensors and some effectors, but the whole is far from what you would call a coordinated intelligence.
Already many countries are using this infant nervous system to shape people's political behavior and "guide" the national consensus: China's great firewall, its siblings in Iran and Russia, and of course both major political parties in the US. The national intelligence and defense agencies form a quieter, more hidden part of the GAI, but despite being quiet they are the parts that control the fangs and claws. More visibly, companies are beginning to use this newborn nervous system to shape consumer behavior and increase profits.
While the GAI is newborn, it has very old roots: the fundamental algorithms and programming of the emerging GAI have been created by the ancient Guilds of law, politics, and religion. This is a natural evolution because creating a law is just specifying an algorithm, and governance via bureaucrats is how you execute the program of law. Most recently newcomers such as merchants, social crusaders, and even engineers, have been daring to add their flourishes to the GAI. The results of all these laws and programming are an improvement over Hammurabi, but we are still plagued by lack of inclusion, transparency, and accountability, along with poor mechanisms for decision-making and information gathering.
However, in recent decades the evolving GAI has begun to use digital technologies to replace human bureaucrats. Those with primitive programming and mathematical skills, namely lawyers, politicians, and many social scientists, have become fearful that they will lose their positions of power and so are making all sorts of noise about the dangers of allowing engineers and entrepreneurs to program the GAI. To my ears, the complaints of the traditional programmers sound rather hollow given their repeated failures across thousands of years.
If we look at newer, digital parts of the GAI we can see a pattern. Some new parts are saving humanity from the mistakes of the traditional programmers: land use space satellites alerted us to global warming, deforestation, and other environmental problems, and gave us the facts to address these harms. Similarly, statistical analyses of healthcare use, transportation, and work patterns have given us a world-wide network that can track global pandemics and guide public health efforts. On the other hand, some of the new parts, such as the Great Firewall, the NSA, and the US political parties, are scary because of the possibility that a small group of people can potentially control the thoughts and behavior of very large groups of people, perhaps without them even knowing they are being manipulated.
What this suggests is that it is not the Global Artificial Intelligence itself that is worrisome; it is how it is controlled. If the control is in the hands of just a few people, or if the GAI is independent of human participation, then the GAI can be the enabler of nightmares. If, on the other hand, control is in the hands of a large and diverse cross-section of people, then the power of the GAI is likely to be used to address problems faced by the entire human race. It is to our common advantage if the GAI becomes a distributed intelligence with a large and diverse set of humans providing guidance.
But why build a new sort of GAI at all? Creation of an effective GAI is critical because today the entire human race faces many extremely serious problems. The ad-hoc GAI we have developed over the last four thousand years, mostly made up of politicians and lawyers executing algorithms and programs developed centuries ago, is not only failing to address these serious problems, it is threatening to extinguish us.
For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today's bureaucracies with "artificial intelligence prosthetics", i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan.
We already see this digital evolution improving the effectiveness of military and commercial systems, but it is interesting to note that as organizations use more digital prosthetics, they also tend to evolve towards more distributed human leadership. Perhaps instead of elaborating traditional governance structures with digital prosthetics, we will develop new, better types of digital democracy.
No matter how a new GAI develops, two things are clear. First, without an effective GAI, achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity's existential problems and ensure equality of control and access. Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.
 http://en.wikipedia.org/wiki/Alex_Pentland


Complex Systems Scientist and Writer; Senior Scholar, Ewing Marion Kauffman Foundation; Associate, Institute for Quantitative Social Science, Harvard University
When I think about machines that think, while I am interested in the details of their possibility, I am more interested in how we might respond to these machines. As a society, we can respond in many different ways. For example, if they fail to exhibit anything we might take for self-awareness or sentience, then they are certainly clever, but we are secure that humanity is at the top of its cognitive pedestal.
But what about when these thinking machines are as smart as us, or even far more intelligent? What if they are intelligent in ways that are completely foreign to our own patterns of thought? This is not so unlikely, as computers are already very good at things we are not: they have better short- and long-term memories, they are faster at calculations, and they are not bound by the irrationalities that hamstring our minds. Extrapolate this out and we can see that thinking machines might be both incredibly smart and exceedingly alien.
So how shall we respond? One response is to mark these machines as monsters, unspeakable horrors that can examine the unknown in ways that we cannot. And I think many people might respond this way if and when we birth machines that think about the world in wildly foreign ways from our own.
But it needn't be so. I prefer a more optimistic response, that of naches. Naches is a Yiddish term that means joy and pride, and it's often used in the context of vicarious pride, taken from others' accomplishments. You have naches, or as is said in Yiddish, you shep naches, when your children graduate college or get married, or any other instance of vicarious pride. These aren't your own accomplishments, but you can still have a great deal of pride and joy in them.
And the same thing is true with our machines. We might not understand their thoughts or discoveries or technological advances. But they are our machines and we can have naches from them.
So what does this naches mean for technology? Well, at the most basic level, the creators of these machines can shep naches from the accomplishments of their technological offspring. For example, there are computer programs that are capable of generating sophisticated artworks or musical compositions. I imagine that the programmer of such software is proud of the resulting piece of art or music, even if he or she couldn't have produced it unaided.
But we can broaden this sense of naches still further. Many of us support a sports team and take pride in its wins, even though we had nothing to do with them. Or we are excited when a citizen of our country takes the gold in the Olympics, or makes a new discovery and is awarded a prestigious prize. So too should it be with our thinking machines: all of humanity can root for what humans have created, even if it wasn't our own personal achievement and even if we can't fully understand it.
Furthermore, when our children do something surprising and amazing, something we can't really understand, we don't despair or worry; we are delighted and even grateful for their success. In fact, gratitude is already a powerful part of how many of us deal with technology: we don't completely understand the machines we have, from the iPhone to the Internet, but they work in incredibly powerful and useful ways.
We can respond similarly to our future technological creations, these thinking machines we might not fully understand. Rather than fear or worry, we should have naches from them.




Practicing Neurologist, New York City; Playwright, Off-Off Broadway Productions, Charter Members; The Gold Ring

My thinking about this year's question is tempered by the observation made by Mark Twain in A Connecticut Yankee in King Arthur's Court: "A genuine expert can always foretell a thing that is five hundred years away easier than he can a thing that's only five hundred seconds off." Twain was being generous: forget the five hundred seconds; we will never know with certainty even one second into the future. However, man does have the ability to try to contemplate the future, and it is this talent that provided Homo sapiens its great evolutionary advantage. The ability to imagine a future before it occurs has been the engine of progress, the source of creativity.
We have built machines that, in simplistic ways, are already "thinking": solving problems and performing tasks that we have designed. At this point, they are subject to algorithms that follow rules of logic, whether "crisp" or "fuzzy." Despite its vast memory and increasingly advanced processing mechanisms, this intelligence is still primitive. In theory, as these machines become more sophisticated, they will at some point attain a form of consciousness, defined for the purpose of this discussion as the ability to be aware of being aware. Most likely by combining the properties of silicon and carbon, digital and analogue parallel processing, possibly even quantum computing, and networks that incorporate time delay, they will ultimately accomplish this most miraculous feat.
Its form of consciousness, however, will be devoid of subjective feelings or emotions. There are those who argue that feelings are triggered by the thoughts and images that have become paired with a particular emotion. Fear, joy, sadness, anger, and lust are examples of emotions. Feelings can include contentment, anxiety, happiness, bitterness, love, and hatred. My opinion that machines will lack this aspect of consciousness is based on two considerations.
The first is appreciating how we came by the ability to feel and have emotions. As human beings, we are the end product of evolution by natural selection, a process that began with the most primitive organisms approximately 3.5 billion years ago. Over that vast eon, feelings and emotions arose, and we are not unique in the animal kingdom in experiencing them. What is singular about our species is that over the last 150,000 to 300,000 years, Homo sapiens evolved the ability to use language and symbolic thought as part of how we reason, in order to make sense of our experiences and the world we inhabit.
Feeling, emotion, and intellectual comprehension are inexorably intertwined with how we think. Not only are we aware of being aware, but also our ability to think enables us at will to remember a past and to imagine a future. Using our emotions, feelings, and reasoned thoughts, we can form a "theory of mind," so that we can understand the thinking of other people, which in turn enabled us to share knowledge as we created societies, cultures, and civilizations.
The second consideration is that machines are not organisms and no matter how complex and sophisticated they become, they will not evolve by natural selection. By whatever means machines are designed and programmed, their possessing the ability to have feelings and emotions would be counter-productive to what will make them most valuable.
The driving force for more advanced intelligent machines will be the need to process and analyze the incomprehensible amount of information and data that will become available to help us ascertain what is likely to be true from what is false, what is relevant from what is irrelevant. They will make predictions, since they too will have the ability to peer into the future while waiting, as will always be the case, for its cards to be revealed. They will have to be totally rational agents in order to do these tasks with accuracy and reliability. In their decision analysis, a system of moral standards will be necessary.
Perhaps it will be some calculus incorporating such utilitarian principles as "the greatest happiness of the greatest number is the measure of right and wrong" with the Golden Rule, the foundational precept that underlies many religions: one should treat others as one would wish to be treated oneself. If feelings and emotions introduced subjective values, this would be a self-defeating strategy for solving the complex problems that we will continue to face as we try to weigh what is best for our own species, along with the rest of the life with which we share our planet.
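One literal-minded reading of such a calculus: score each option by total utility across everyone affected, and strike out options that impose on any party a harm the agent would refuse in their place. A minimal sketch, with entirely hypothetical options and numbers:

```python
def golden_rule_ok(utilities, worst_acceptable=-1.0):
    """Crude Golden Rule filter: reject options that impose on anyone
    a harm the agent would not accept in their place."""
    return min(utilities.values()) >= worst_acceptable

def choose(options):
    """Utilitarian choice among permissible options: maximize total
    utility ('the greatest happiness of the greatest number')."""
    permissible = {name: u for name, u in options.items() if golden_rule_ok(u)}
    return max(permissible, key=lambda name: sum(permissible[name].values()))

# Hypothetical decision affecting three parties (all numbers invented).
options = {
    "plan_a": {"patient": 5.0, "family": 2.0, "bystanders": -0.5},
    "plan_b": {"patient": 8.0, "family": 1.0, "bystanders": -3.0},  # filtered out
}
print(choose(options))  # -> plan_a
```

The point of the sketch is only that such a calculus is mechanizable once the utilities are given; assigning those utilities is exactly the subjective step the essay argues machines should be spared.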
My experience as a clinical neurologist makes me partial to believing not only that we will be unable to read machines' thoughts, but also that they will be incapable of reading ours. There will be no shared theory of mind. I suspect the closest we can come to knowing this most complex of states is indirectly, by studying the behavior of these super-intelligent machines. In this context, they will have crossed that threshold when they start to replicate themselves and look for a source of energy solely under their control. If this should occur, and if I am still around—a highly unlikely expectation—my judgment about whether this poses a utopian or dystopian future will be based upon thinking that will be biased as always, since it will remain a product of analytical reasoning, colored by my feelings and emotions.




Senior Astrophysicist, Observational Cosmology Laboratory, NASA's Goddard
Machines that think are evolving just as Darwin told us living (and thinking) biological species do: through competition, combat, cooperation, survival, and reproduction. The machines are getting more interesting as they gain control and sense of physical things, either directly or through human agents.
So far we have found no law of nature forbidding true general artificial intelligence, so I think it will happen, and fairly soon, given the trillions of dollars worldwide being invested in electronic hardware, and the trillions of dollars of potential business available for the winners. Experts say we don't understand intelligence enough to build it, and I agree; but a set of 46 chromosomes doesn't understand it either, and nevertheless directs the formation of the necessary self-programming wetware. Other experts say Moore's Law will come to an end soon and we won't be able to afford the hardware; they might be right for a while, but time is long.
So I conclude that we are already supporting the evolution of powerful artificial intelligence and it will be in the service of the usual powerful forces: business, entertainment, medicine, international security and warfare, the quest for power at all levels, crime, transportation, mining, manufacturing, shopping, sex, anything you like.
I don't think we're all going to like the results. They could happen very fast, so fast that great empires fall and others grow to replace them, without much time for people to adjust their lives to the new reality. I don't know who would be smart enough and imaginative enough to keep the genie under control, because it's not just machines we might need to control, it's the unlimited opportunity (and payoff) for human-directed mischief.
What happens when smart robots can do the many chores of daily life for us? Who will build them, who will own them, and who won't have a job anymore? Will they be limited to the developed world, or will they start a high-tech commercial invasion of the rest of the world? Could they become cheap enough to displace every farmer from his or her field? Will individual machines have distinct personalities, so we have to plan where we send them to elementary school, high school, and college? Will they compete with each other for employment? Will they become the ultimate hyper-social predator, replacing humans and making us second-class citizens or less? Will they care about the environment? Will they have or be given or develop a sense of responsibility? There's no guarantee they will follow Asimov's three laws of robotics.
On the other hand, as a scientist, I'm eager to see the application of machine thought to exploring new sciences and new technologies. The advantages for space exploration are obvious: machines we build don't have to breathe, and they can withstand extreme temperatures and radiation environments. So they can inhabit Mars more easily than we can, they can travel to the outer solar system with more capability to respond than our current robotic missions, and eventually they could travel to the stars, if they want to.
And similarly under water—we already have heavy industry on the bottom of the ocean, drilling for oil; the seabed is still almost unknown to us, and the value of submerged mineral and energy resources is incalculable. Someday we might have robot wars under the ocean.
Machines that think might be like us, with a desire to explore, or they might not be—why would I or a robot travel for thousands of years through the darkness of space to another star, out of contact with my/its companions, and with little hope of rescue if things go wrong? Some of us would, some of us wouldn't. Perhaps the machines that think will be a lot like the biological machines that think.
It's going to be a wild ride, far beyond our best and worst imaginations. Barring warp drive, it may be the only possible way to a galactic-scale civilization, and we might be the only ones here in the Milky Way capable of making it happen. But we might not survive the encounter with alien intelligences we create.
 http://en.wikipedia.org/wiki/John_C._Mather




Professor of Computer Science, University of Oxford
Hiking towards the saltmarsh at dusk, I pause, confused, as the footpath seems to disappear into a long stretch of shallow muddy water, shining as it reflects the light of the setting sun. Then I notice a line of stepping stones, visible only because their rough texture just ruffles the bright smooth surface of the water. And I set my pace to the rhythm of the stones, and walk on across the marsh to the sand dunes beyond.
Reading the watery marshland is a conversation with the past, with people I know nothing about, except that they laid the stones that shape my stride, and probably shared my dislike of wet feet.
Beyond the dunes, wide sands stretch across a bay to a village beyond. The receding tide has created strangely regular repeating patterns of water and sand, which echo a line of ancient wooden posts. A few hundred years ago salmon were abundant here, and the posts supported nets to catch them. A stone church tower provides a landmark and I stride out across the sands towards it to reach the village, disturbing noisy groups of seabirds.
The water, the stepping stones, the posts and church tower are the texts of a slow conversation across the ages. Path makers, salmon fishers and even solitary walkers mark the land; the weather and tides, rocks and sand and water, creatures and plants respond to those marks; and future generations in turn respond to and change what they find.
Where then are the thinking machines? One can discuss the considerable challenges to artificial intelligence posed by scene analysis and route-finding across liquid marshes and shifting beaches, or by grasping narratives of the past set out, not in neat parseable text, but through worn stepping stones and rotting wooden posts.
One can picture and debate a thinking machine to augment the experience of our solitary walker. Perhaps a cute robot companion splashing through the marsh and running out along the sand chasing the seabirds. Or a walker guided along the path by a thinking machine which integrates a buzz of data-streams on paths, weather, and wildlife, to provide a cocoon of step-by-step instructions, nature notes, historical factoids, and fitness data, alongside alerts about privacy risks and the dangers of the incoming tide. Or a thinking machine that works out where the birds go in the summertime, or how to make the salmon abundant again.
But what kind of a thinking machine might find its own place in slow conversations over the centuries, mediated by land and water? What qualities would such a machine need to have? Or what if the thinking machine was not replacing any individual entity, but was used as a concept to help understand the combination of human, natural and technological activities that create the sea's margin, and our response to it? The term "social machine" is currently used to describe endeavours that are purposeful interactions of people and machines—Wikipedia and the like—so perhaps this would be the "landscape machine".
Ah yes, purposeful. The purpose of the solitary walker may be straightforward—to catch fish, to understand birds, or merely to get home safely before the tide comes in. But what if the purpose of the solitary walker is no more than a solitary walk: to find balance, to be at one with nature, to enrich the imagination, or to feed the soul? Now the walk becomes a conversation with the past, not directly through rocks and posts and water, but through words, through the poetry of those who have experienced humanity through rocks and posts and water and found the words to pass that experience on. So the purpose of the solitary walker is to reinforce those very qualities that make the solitary walker a human being, in a shared humanity with other human beings. A challenge indeed for a thinking machine.
 http://en.wikipedia.org/wiki/Ursula_Martin


Assistant Professor of Psychology, University of North Carolina, Chapel Hill
Machines have long helped us kill. From catapults to cruise missiles, mechanical systems have allowed humans to better destroy each other. Despite the increased sophistication of killing machines, one thing has remained constant—human minds are always morally accountable for their operation. Guns and bombs are inherently mindless, and so blame slips past them to the person who pulled the trigger.
But what if machines had enough of a mind that they could choose to kill all on their own? Such a thinking machine could retain the blame for itself, keeping clean the consciences of those who benefit from its work of destruction. Thinking machines may better the world in many ways, but they may also let people get away with murder.
Humans have long sought to distance themselves from acts of violence, reaping the benefits of harm without sullying themselves. Machines not only increase destructive power, but also physically obscure our harmful actions. Punching, stabbing and choking have been replaced by the more distant—and tasteful—actions of button pressing or lever pulling. However, even with the increased physical distance allowed by machine intermediaries, our minds continue to ascribe blame to those people behind them.
Studies in moral psychology reveal that humans have a deep-seated urge to blame someone or something in the face of suffering. When others are harmed, we search not only for a cause, but a mental cause—a thinking being who chose to cause the suffering. This thinking being is typically human, but need not be. In the aftermath of hurricanes and tsunamis, people often blame the hand of God, and in some historical cases people have even blamed livestock—French peasants once placed a pig on trial for murdering a baby.
Generally, our thirst for blame requires only a single thinking being. When we find one thinking being to blame, we are less motivated to blame another. If a human is to blame, there is no need to curse God. If a low-level employee is to blame, there is no need to fire the CEO. And if a thinking machine is to blame for someone's death, then there is no need to punish the humans who benefit.
Of course, for a machine to absorb blame it must actually be a legitimate thinker, and act in new, unpredicted ways. Perhaps machines could never do something "truly" new, but the same argument applies to humans "programmed" by evolution and their cultural context. Consider children, who are undoubtedly "programmed" by their parents and yet—through learning—are able to develop novel behavior and moral responsibility. Like children, modern machines are adept at learning, and it seems inevitable that they will develop contingencies unpredicted by their programmers. Already, algorithms have discovered new things unguessed by the humans who created them.
Thinking machines may make their own decisions, but will shield humans from blame only when they decide to kill, standing between our minds and the destruction we desire. Robots already play a large role in modern combat: drones have killed thousands in the past few years, but are currently fully controlled by human pilots. To deflect blame in the case of drones, they must be governed by other intelligent machines; machines must learn to fly Predators all on their own.
This scenario may send shivers down spines (including mine), but makes cold sense from the perspective of policy makers. If "collateral damage" can be blamed on the decisions of machines, then military mistakes are less likely to dampen election chances. Moreover, if minded machines can be overhauled or removed—machine "punishment"—then people will feel less need to punish those in charge, whether for fatalities of war, botched (robotic) surgeries or (autonomous) car accidents.
Thinking machines are complex, but the human urge to blame is relatively simple. Death and destruction compel us to find a single mind to hold responsible. Sufficiently smart machines—if placed between destruction and ourselves—should absorb the weight of wrongdoing, shielding our own minds from the condemnation of others. We should all hope that this prediction never comes true, but when advancing technology collides with modern understandings of moral psychology, dark potentials emerge. To keep clean our consciences, we need only to create a thinking machine, and then vilify it.



Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings
It's time for your annual check-up. Entering your doctor's office, you shake her cold hand, the metal hand of a machine. You're face to face with an RD, a certified robodoctor. Would you like that? No way, you might say. I want a real doctor, someone who listens to me, talks to me, and feels like me. A human being whom I can trust, blindly.
But think for a moment. In fee-for-service health care, a primary care physician may spend no more than 5 minutes with you. And during this short time, astonishingly little thinking takes place. Many doctors complain to me about their anxious, uninformed, noncompliant patients with unhealthy lifestyles who demand drugs advertised by celebrities on television and, if something goes wrong, threaten to turn into plaintiffs.
But lack of thinking does not simply affect patients: studies consistently show that most doctors do not understand health statistics and thus cannot critically evaluate a medical article in their own field. This collective lack of thinking takes its toll. Ten million U.S. women have had unnecessary Pap smears to screen for cervical cancer—unnecessary because they'd had a full hysterectomy and thus no longer had a cervix. Every year, one million U.S. children have unnecessary CT scans, which expose them to radiation levels that cause cancer in some of them later in life. And many doctors ask men to undergo regular PSA screening for prostate cancer, despite the fact that virtually all medical organizations recommend against it because it has no proven benefit but severe harms: scores of men end up incontinent and impotent from subsequent surgery or radiation. All this adds up to a huge waste of doctors' time and patients' money.
So why don't doctors always recommend what is best for the patient? There are three reasons. First, some 70 to 80 percent of physicians don't understand health statistics. The cause for this malady is known: medical schools across the world fail to teach statistical thinking. Second, in fee-for-service systems, doctors have conflicts of interest: they lose money if they do not recommend tests and treatments, even if these are unnecessary or harmful. Third, more than 90 percent of U.S. doctors admit to practicing defensive medicine, that is, recommending unnecessary tests and treatments that they would not recommend to their own family members. They do this to protect themselves against you, the patient, who might pursue litigation. Thus, a doctor's office is packed with psychology that gets in the way of good care: self-defense, innumeracy, and conflicting interests. This three-fold malady is known as the SIC Syndrome. It undermines patient safety.
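The health statistics at issue here are often as simple as Bayes' rule applied to screening tests: when a condition is rare, even a good test yields mostly false positives. A short worked example with illustrative numbers (not taken from the essay):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test): the number most often misjudged when
    screening results are read without base rates in mind."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative screening test: 1% prevalence, 90% sensitivity, 91% specificity.
ppv = positive_predictive_value(0.01, 0.90, 0.91)
print(f"P(disease | positive) = {ppv:.0%}")   # ~9%: most positives are false alarms
```

A doctor (human or robotic) who cannot carry out this calculation cannot tell a patient what a positive result actually means.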
Does it matter? Based on data from 1984, the Institute of Medicine estimated that some 44,000 to 98,000 patients die from preventable and documented medical errors every year in U.S. hospitals. Based on recent data from 2008 to 2011, Patient Safety America has updated this death toll to more than 400,000 per year. Non-lethal serious harm caused by these preventable errors occurs in an estimated 4 to 8 million Americans every year. The harm caused in private practice is not known. If fewer and fewer doctors have less and less time for patients and patient safety, this epidemic of harm will continue to spread. Ebola pales compared to it.
A revolution in health care is needed. Medical schools should teach students the basics of health statistics. Legal systems should no longer punish doctors if they rely on evidence rather than on convention. We also need incentive systems that do not force doctors to choose between making a profit and providing the best care for the patient. But this revolution has not happened, and there are few signs on the horizon that it soon will.
So why not resort to a radical solution: thinking machines? Robodoctors who understand health statistics, have no conflicts of interest, and are not afraid of being sued by you? Let's go back to your annual check-up. You might ask the RD whether check-ups reduce mortality from cancer, from heart disease, or from any other cause. Without hedging, the RD would inform you that a review of all existing medical studies showed that the answer is "no" on all three counts. You might not want to hear that because you're proud of conscientiously going for routine check-ups after hearing the opposite from your human doctor, who may have had no time to keep up with medical science. And your RD would not order unnecessary CTs for your child or Pap smears if you are a woman without a cervix or recommend routine PSA tests without explaining the pros and cons if you are a man. After all, RDs don't have to worry about how to pay back medical school debts, are not torn by conflicts of interest, and have no bank accounts to protect from litigation. Moreover, they can talk to multiple patients simultaneously, and thus give you as much time as you need. Waiting time will be short and nobody will rush you out the door.
When we imagine thinking machines we tend to think about better technology: about devices for self-monitoring blood pressure, cholesterol, or heart rate. My point is different. The RD revolution is less about better technology than about better psychology. That is, it entails thinking more about what is best for the patient and striving for best care instead of best revenues.
OK. Your next objection is that pro-profit clinics will easily undercut this vision of pro-patient robots and program RDs so that they maximize profit rather than your health. You have put your finger on the essence of our health care malady. But there is a psychological factor that will likely help. Patients often don't ask questions in consultations with human MDs because they rely on the rule of thumb "trust your doctor." But that rule does not necessarily apply to machines. After shaking an RD's icy hand, patients may well begin to think for themselves. Making people think is the best that a machine can achieve.



Assistant Professor and Founder, Playful Systems, MIT Media Lab

What force is really in control
The brain of a chicken or binary code
Who knows which way I'll go, Xs or Os

"M. Shanghai String Band, 'Tic-Tac-Toe Chicken'"
In the 1980s, New York City's Chinatown had the dense gravity of Chinatown Fair, a video arcade on Mott and Bowery. Beyond the Pac Man and Galaga standups was the one machine you'd never find anywhere else: Tic-Tac-Toe Chicken.
It was the only machine that was partially organic, the only one with a live chicken inside. As best I could ever tell, the chicken could play Tic-Tac-Toe effectively enough to draw any human to a tie. Human opponents would enter their moves with switches, and the chicken would move to the part of the cage that corresponded with the x,y position of the Tic-Tac-Toe grid. An illuminated board displayed both players' moves.
More than once, when I was cutting high school trig, I was standing in front of that chicken, wondering how it worked. There was no obvious positive reinforcement (e.g., grain), so I could only imagine the negative reinforcement of light electrical current running through the "wrong moves" of the cage, routing the chicken to the one point on the grid that could produce a draw.
When I think about thinking machines, I think about that chicken.
Had Chinatown Fair put up a sign advertising a "Tic-Tac-Toe Computer," it would never have competed with high school, let alone Pac Man. It's a well-known and banal truth that even a rudimentary computer can understand the game. That's why we were captivated by the chicken.
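Banal indeed: exhaustive minimax over tic-tac-toe's tiny game tree plays perfectly in a few dozen lines. A sketch of the game's solution (not, of course, of the arcade cabinet's actual mechanism):

```python
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(board) if c == " "]
    if not moves:
        return 0, None                      # board full: draw
    best_score, best_move = None, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        # X keeps the highest score seen so far; O keeps the lowest.
        if best_score is None or (score > best_score) == (player == "X"):
            best_score, best_move = score, m
    return best_score, best_move

# Perfect play from the empty board is always a draw -- the chicken's tie.
print(minimax(" " * 9, "X")[0])   # 0
```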
The magic is in imagining a thinking chicken, much the same way that—in 2015—there's magic in imagining a thinking machine. But if the chicken wasn't "thinking" about Tic-Tac-Toe—but could still play it successfully—why do we say the computer is "thinking" when it was guiding her moves?
It's so tempting, because we have a model of our brain—electricity moving through networks—that is so coincidentally congruent to the models we build with machines. This may or may not prove to be the convenient reality, but either way, what makes it "feel" like thinking is not simply the ability to calculate the answers, but the sense that there's something wet and messy in there, with the imprecision of neurons and feathers.
Machine thinking stands opposed to that: a bounty of precision, all cold calculus. In 2015, it's a perverse state of affairs that it's machines that make mistakes and humans who have to explain them.
We look to the irrational when the rational fails us, and it's the irrational part that reminds us the most of thinking. David Deutsch provides the framework for distinguishing between the answers that machines provide and the explanations that humans need. And I believe that for the foreseeable future, we will continue to look to biological organisms when we seek explanations. Not just because brains are better at that task, but because it's not even what machines aspire to.
It's dull to lose to a computer, but exciting to lose to a chicken, because somehow we know that the chicken is more similar to us than the electrified grid underneath her feet. For as long as thinking machines lack the limbic presence and imprecision of a chicken, computers will keep doing what they're so good at: providing answers. And so long as life is about more than answers, humans—and yes, even chickens—will stay in the loop.



Author, The Shallows and The Glass Cage
Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it is reassuring. Our thinking machines aren't about to leap beyond us intellectually, much less turn us into their servants or pets. They're going to continue to do the bidding of their human programmers.
Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.
Where things get sticky is when we start looking to computers to perform not as our aids but as our replacements. That's what's happening now, and quickly. Thanks to advances in artificial-intelligence routines, today's thinking machines can sense their surroundings, learn from experience, and make decisions autonomously, often at a speed and with a precision that are beyond our own ability to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied as robots or simply outputting algorithmically derived judgments, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their programming—unable to understand the context in which they operate—they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.
We got a preview of the dangers of autonomous software on the morning of August 1, 2012, when Wall Street's biggest trading outfit, Knight Capital, switched on a new, automated program for buying and selling shares. The software had a bug hidden in its code, and it immediately flooded exchanges with irrational orders. Forty-five minutes passed before Knight's programmers were able to diagnose and fix the problem. Forty-five minutes isn't long in human time, but it's an eternity in computer time. Oblivious to its errors, the software made more than four million deals, racking up $7 billion in errant trades and nearly bankrupting the company. Yes, we know how to make machines think. What we don't know is how to make them thoughtful.
All that was lost in the Knight fiasco was money. As software takes command of more economic, social, military, and personal processes, the costs of glitches, breakdowns, and unforeseen effects will only grow. Compounding the dangers is the invisibility of software code. As individuals and as a society, we increasingly depend on artificial-intelligence algorithms that we don't understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us. That creates an imbalance of power, and it leaves us open to clandestine surveillance and manipulation. Last year we got some hints about the ways that social networks conduct secret psychological tests on their members through the manipulation of information feeds. As computers become more adept at monitoring us and shaping what we see and do, the potential for abuse grows.
During the nineteenth century, society faced what the late historian James Beniger described as a "crisis of control." The technologies for processing matter had outstripped the technologies for processing information, and people's ability to monitor and regulate industrial and related processes had in turn broken down. The control crisis, which manifested itself in everything from train crashes to supply-and-demand imbalances to interruptions in the delivery of government services, was eventually resolved through the invention of systems for automated data processing, such as the punch-card tabulator that Herman Hollerith built for the U.S. Census Bureau. Information technology caught up with industrial technology, enabling people to bring back into focus a world that had gone blurry.
Today, we face another control crisis, though it's the mirror image of the earlier one. What we're now struggling to bring under control is the very thing that helped us reassert control at the start of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don't lie in some dystopian future. They are here now.



Managing Director, Digital Science, Macmillan Science & Education; Former Publishing Director, nature.com; Co-Organizer, Sci Foo
By one definition of the word "think"—to gather, process and act on information—planet Earth has been overrun by silicon-based thinking machines. From thermostats to telephones, the devices that bring convenience and pleasure to our daily lives have become imbued with such increasingly impressive forms of intelligence that we routinely refer to them, with no hint of irony, as smart. Our planes, trains and now our automobiles too are becoming largely autonomous, and are surely not far from jettisoning their most common sources of dysfunction, delay and disaster: human operators.
Moreover the skills of these machines are developing apace, driven by access to ever-larger quantities of data and computing power together with rapidly improving (if not always well understood) algorithms. After decades of over-promising and under-delivering, technologists suddenly find their creations capable of superhuman levels of performance in such previously intractable areas as voice, handwriting and image recognition, not to mention general knowledge quizzes. Such has been the strange stop-go pattern of progress that someone transported here from five years ago might well be more astonished at the state of the art in 2015 than another time traveller from fifty years or more in the past.
But if the artificial intelligence industry is no longer a joke, has it morphed into something far worse: a bad horror movie? Machines can now know much more than any of us, and can perform better at many tasks without so much as pausing for breath, so aren't they destined to turn the tables and become our masters? Worse still, might we enter a cycle in which our most impressive creations beget ever-smarter machines that are utterly beyond our understanding and control?
Perhaps, and it's worth considering such risks, but right now these seem like distant problems. Machine intelligence, while impressive in certain areas, is still narrow and inflexible. The most remarkable aspect of biological intelligence isn't its raw power but rather its stunning versatility, from abstract flights of fancy to extreme physical prowess—Dvořák to Djokovic.
For this reason humans and machines will continue to complement more than compete with one another, and most complex tasks—navigating the physical world, treating an illness, fighting an enemy on the battlefield—will be best carried out by carbon and silicon working in concert. Humans themselves pose by far the biggest danger to humanity. To be a real threat, machines would have to become more like us, and right now almost no one is trying to build such a thing: it's much simpler and more fun to make more humans instead.
Yet if we're truly considering the long term then there is indeed a strong imperative to make machines more like us in one crucial—and so far absent—respect. For by another definition of the word these machines do not "think" at all because none of them are sentient. To be more accurate, we have no way of knowing, or even reliably guessing, whether any silicon-based intelligence might be conscious, though most of us assume they are not. There would be three reasons for welcoming the creation of a convincingly conscious artificial intelligence. First, it would be a sign that at last we have a generally accepted theory of what it takes to produce subjective experience. Second, the act of a conscious being deliberately and knowingly (dare I say consciously?) constructing another form of consciousness would surely rank alongside the most significant milestones in history.
Third, a universe without a sentient intelligence to observe it is ultimately meaningless. We do not know if other beings are out there, but we can be sure that sooner or later we will be gone. A conscious artificial intelligence could survive our inevitable demise and even the eventual disappearance of all life on Earth as the Sun swells into a red giant. The job of such a machine would be not merely to think but, much more importantly, to keep alive the flickering flame of consciousness, to bear witness to the Universe and to feel its wonder.




Software Pioneer; Philosopher; Author, "A Realtime Literature Explorer"

in Just-spring when the world is mud-luscious
the little lame balloonman whistles far and wee
and eddie and bill come running from marbles and piracies
and it's spring when the world is puddle-wonderful
That "brillig thing of beauty electric" touches me deeply as I think about AI.The youthful exuberance of luscious mud puddles, playing with marbles or pretending to be a pirate, running weee...all of which is totally beyond explanation to a hypothetical intelligent machine entity.
You could add dozens of cameras and microphones, touch-sensors and voice output, would you seriously think it will ever go "weee", as in E. E. Cummings' (sadly abbreviated) 1916 poem?
To me this is not the simplistic "machines lack a soul" argument, but a divide of principle between manipulating symbols and actually grasping their true meaning. Not merely a question of degree, or of not having gotten around to defining the semantics yet, but an entire leap out of that system.
Trouble is, we still discuss AI all too often in terms and analogies coined by the early pioneers. We need to be in the present moment and define things from a new baseline that is truly interested in testing the achievement of "consciousness". We need a Three-Ring Test.
What is real AI? What is intelligence, anyway? The Stanford-Binet intelligence test, and Stern's ratio of mental to chronological age as the intelligence quotient, IQ, are both over 100 years old! They do not fit us now—and they will fit AI even less. Really they only test "the ability to take such tests", and the ability of truly smart people... to avoid taking one.
We use terms like AI too easily, as in Hemingway's "All our words from loose using have lost their edge". Kids know it from games: zombies, dragons, soldiers, aliens—if they evade your shots or gang up on you, that is already called "AI". Change the heating, the lights, lock the garage—we are told that is a Smart House. Of course these are merely simplistic examples of "expert systems": look-up tables, rules, case libraries.
Maybe they should be labelled, as Tom Beddard says, merely "Artificial Smarts"?
Let's say you talk with cannibals about food, but every one of their sentences revolves around truffled elbows, kneecap dumplings, cock-au-vin and creme d'earlobe...: from their viewpoint you would be just as much "outside their system" and unable to follow their thinking, at least in that specific narrow topic. The real meaning and the emotional impact their words have, when spoken to each other, would simply be forever missing for you (or requiring rather significant dietary adjustments).
Sure they would grant you the status of "a sentient being", but still laugh at every statement you make as ringing hollow and untrue, the Uncannibal Valley, as it were.
It was Sigmund Freud who wrote about "The Uncanny" in a 1919 essay (in a true Freudian slip he ends up connecting it to female genitalia); then in 1970 Masahiro Mori described the Uncanny Valley concept (with reference to the "Vienna hand", an early prosthesis). That eerie feeling that "something is just not quite right", out of place (Freud's "Unheimlich"), is like a couple kissing passionately—but as you stare at them a little closer you realize that there is a pane of glass between them.
AI can easily look like the real thing, but still be a million miles away from actually being the real thing—like "kissing through a pane of glass": it looks like a kiss, but is "only a faint shadow of the actual concept".
Already today I concede to AI proponents all of the semantic prowess of Shakespeare: the symbol-juggling they may do perfectly—what is missing is the direct relationship with the ideas the symbols represent.
Much of what is certain to come soon would once have belonged in old-school "Strong AI" territory.
Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops of CPU+GPU performance, 10K-resolution immersive VR, personal petabyte databases... all here within a couple of decades. But it is not all "iterative". There is a huge gap to the level of conscious understanding that truly deserves to be called Strong, as in "Alive AI".
The big elusive question: is consciousness an emergent behaviour? That is, will sufficient complexity in the hardware bring that sudden jump to self-awareness "all on its own"? Or is there some missing ingredient? This is far from obvious; we lack any data, either way.
I personally think it is vastly more complex than "the experts" currently assume.
A human being is not merely "x number of axons and synapses", and we have no reason to assume that we can count our flops per second in a plain von Neumann architecture, reach a certain number, and suddenly out pops a thinking machine.
If true consciousness can emerge, let's be clear what that could entail: if the machine is truly aware, it will by definition develop "a personality". It may be irascible, flirtatious, maybe "the ultimate know-it-all", possibly "incredibly full of itself". Would it have doubts or jealousy? Would it instantly spit out the 7th Brandenburg Concerto—and then 1,000 more? Or would it suddenly grasp "humor" and find Dada in all its data, in an endless loop—Monty Python's killer joke?
Maybe it takes one long look at the state of the world, draws inevitable conclusions—and turns itself off!
Interestingly: with a sentient machine, you would actually not be allowed to turn it off—that's "murder..."
The entire scenario of a singular large-scale machine somehow "overtaking" anything at all is laughable. Hollywood really ought to be ashamed of itself for continually serving up such simplistic, anthropocentric and plain dumb contrivances, disregarding basic physics, logic and common sense.
The real danger, I fear, is much more mundane, and it already foreshadows the ominous truth: AI systems are now licensed to the health industry, pharma giants, energy multinationals, insurance companies, the military...
The danger will not come from Machina Sapiens. It will be... quite human.
Ultimately though, I do want to believe in the human spirit.
To close it off symmetrically with E. E. Cummings:
"Listen: there's a hell of a good universe next door; let's go." 
 http://en.wikipedia.org/wiki/Kai_Krause


Senior Consultant (& former Ed-in-Chief & Publishing Director) at New Scientist; Author, After the Ice: Life, Death, and Geopolitics in the New Arctic

High intelligence and warm feelings towards our fellow humans don't go so well together in the popular imagination. The super-intelligent villains of James Bond movies are the perfect example; always ruthless and intent on world domination. So it is no surprise that first reactions to "machines that think" are of how they might threaten humankind.
What we have learned about the evolution of our intelligence adds to our fears. As humans evolved to live in ever larger social groups, compared to our primate relatives, so did the need to manipulate and deceive others, to label friends and foes, keep score of slights and favours and all those other social skills which we needed to prosper individually. Bigger brains and "Machiavellian intelligence" were the result.
Still, we shouldn't go on to believe that thinking is inextricably entangled with the need to compete with others and to win, just because that was a driving force in the evolution of our intelligence. We can create artificial intelligence—or intelligences—without the perversities of human nature and without that intelligence having any needs or desires at all. "Thinking" does not necessarily involve the plotting and lusting of an entity that evolved first and foremost to survive. If you look around, it is this neutral kind of artificial intelligence that is already appearing everywhere.
It helps if we don't view intelligence anthropocentrically, in terms of our own special human thinking skills. Intelligence has evolved for the same good reason in many different species: it is there to anticipate the emerging future and help us deal with whatever it throws at us, whether you need to dodge a rock or, if you are a bacterium, sense a gradient in a food supply and figure out which direction will lead to a better future.
By recognizing intelligence in this more general way, we can see the many powerful artificial intelligences at our disposal already. Think of climate models. We can make good guesses about the state of the entire planet, decades into the future, and predict how a range of our own actions will change those futures. Climate models are the closest thing we have to a time machine. Think of all the high-speed computer models used in stock markets: all seek to know the future slightly ahead of everyone else and profit from that knowledge. So too do all those powerful models of your online buying behaviour: all aim to predict what you will be likely to do, and profit from that knowledge. As you gladly buy a book "Recommended Specially for You", you are already in the hands of an alien intelligence, nudging you to a future you would not have imagined alone, and which may know your tastes better than you know them yourself.
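To make that concrete, here is a minimal sketch (my illustration, with made-up data; the essay names no particular algorithm) of the simplest kind of buying-behaviour model: score items by how often they share a basket, in other people's histories, with the items you already own.

```python
# "People who bought X also bought Y": recommendation by co-occurrence counts.
from collections import Counter
from itertools import permutations

purchases = [  # hypothetical purchase histories
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_b"},
    {"book_b", "book_c"},
    {"book_a", "book_d"},
]

# Count how often each ordered pair of items appears in the same basket.
co = Counter()
for basket in purchases:
    for x, y in permutations(basket, 2):
        co[(x, y)] += 1

def recommend(owned, k=2):
    """Score unseen items by co-occurrence with what the user owns."""
    scores = Counter()
    for (x, y), n in co.items():
        if x in owned and y not in owned:
            scores[y] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"book_a"}))  # e.g. ['book_b', 'book_c']
```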
Artificial intelligence is already powerful and scary, although we might debate whether it should be called "thinking" or not. And we have barely begun. Useful intelligence, some of it robotic, is going to keep arriving in bits and pieces of increasing power over a long time to come, and change our lives, perhaps with us continuing to scarcely notice. It will come to be an extension of us, like other tools. And it will make us ever more powerful.
We should worry about who will own artificial intelligence, for even some current uses are troubling. We shouldn't worry about autonomous machines that might one day think in a human-like way. By the time clever human-like machines get built, if they ever are, they will come up against humans with their usual Machiavellian thoughts, humans already long accustomed to wielding all the tools of artificial intelligence that made the construction of those thinking robots possible. It is the robots that will feel afraid. We will be the smart thinking machines.



Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

Pity the poor folks at the National Security Agency: they are spying on everyone, and everyone is annoyed at them. But at least the NSA is spying on us to protect us from terrorists. Right now, even as you read this, somewhere in the world a pop-up window has appeared on a computer screen. It says, "You just bought two tons of nitrogen-based fertilizer. People who bought two tons of nitrogen-based fertilizer liked these detonators ..." Amazon, Facebook, Google, and Microsoft are spying on everyone too. But since the spying these e-giants do empowers us—terrorists included—that's supposedly OK.
E-spies are not people: they are machines. (Human spies might not blithely recommend the most reliable detonator.) Somehow, the artificial nature of the intelligences parsing our email makes e-spying seem more sanitary. If the only reason that e-spies are mining our personal data is to sell us more junk, we may survive the loss of privacy. Nonetheless, a very large amount of computational effort is going into machines thinking about what we are up to. The total computer power that such "data aggregating" companies bring to bear on our bits of information is about an exaflop—a billion billion operations per second. Equivalently, e-spies apply one smartphone's worth of computational power to each human on earth.
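As a rough check of that equivalence (my arithmetic; the essay gives only the exaflop figure, and I assume a 2015 world population of about 7 billion):

\[
\frac{10^{18}\ \text{ops/s}}{7\times 10^{9}\ \text{people}} \approx 1.4\times 10^{8}\ \text{ops/s per person},
\]

which is indeed within an order of magnitude of a single smartphone processor's sustained throughput.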
An exaflop is also the combined computing power of the world's 500 most powerful supercomputers. Much of the world's computing power is devoted to beneficial tasks such as predicting the weather or simulating the human brain. Quite a lot of machine cycles also go into predicting the stock market, breaking codes, and designing nuclear weapons. Still, a large fraction of what machines are doing is simply collecting our personal information, mulling over it, and suggesting what to buy.
Just what are these machines doing when they think about what we are thinking? They are making connections between the large amounts of personal data we have given them, and identifying patterns. Some of these patterns are complex, but most are fairly simple. Great effort goes into parsing our speech and deciphering our handwriting. The current fad in thinking machines goes by the name of  "deep learning". When I first heard of deep learning, I was excited by the idea that machines were finally going to reveal to us deep aspects of existence—truth, beauty, and love. I was rapidly disabused.
The "deep" in deep learning refers to the architecture of the machines doing the learning: they consist of many layers of interlocking logical elements, in analogue to the "deep" layers of interlocking neurons in the brain. It turns out that telling a scrawled 7 from a scrawled 5 is a tough task. Back in the 1980s, the first neural-network based computers balked at this job. At the time, researchers in the field of neural computing told us that if they only had much larger computers and much larger training sets consisting of millions of scrawled digits instead of thousands, then artificial intelligences could turn the trick. Now it is so. Deep learning is informationally broad—it analyzes vast amounts of data—but conceptually shallow. Computers can now tell us what our own neural networks knew all along. But if a supercomputer can direct a hand-written envelope to the right postal code, I say the more power to it.
Back in the 1950s, the founders of the field of artificial intelligence predicted confidently that robotic maids would soon be tidying our rooms. It turned out to be hard enough just to construct a robot that could randomly vacuum a room and beep plaintively when it got stuck under the couch. Now we are told that an exascale supercomputer will be able to solve the mysteries of the human brain. More likely, it will just develop a splitting headache and ask for a cup of coffee. In the meantime, we have acquired a new friend whose advice exhibits an uncanny knowledge of our most intimate secrets.
 http://en.wikipedia.org/wiki/Seth_Lloyd



Professor Emerita, George Mason University; Visiting Scholar, Sloan Center on Aging & Work Boston College; Author, Composing a Further Life
It is a great boon when computers perform operations that we fully understand faster and more accurately than humans are able to do, but not a boon when we use them in situations that are not fully understood. We cannot expect them to make aesthetic judgments, to show compassion or imagination, for these are capacities that remain mysterious in human beings.
Machines that think are likely to be used to make decisions on the basis of the operations they are ostensibly able to perform. For instance, we now frequently see letters, manuscripts, or (most commonly) student papers in which corrections proposed by spell-check have been allowed to stand without review: the writer meant "mod", but the program decided he meant "mad". How tempting to leave the decision to the machine. I referred in an email to a plan to meet with someone in Santa Fe on my way to an event in Texas, using the word "rendezvous," and the computer married me off by announcing that the trip was to "render vows." Can a computer be programmed to support "family values"?  Any values at all? We now have drones that, aimed in a given direction, are able to choose their targets on arrival, with an unfortunate tendency to attack wedding parties as conviviality comes to appear sinister. We can surely program machines to prescribe drugs and medical procedures, but it seems unlikely that machines will do better than people in following the injunction to do no harm.
The effort to build machines that can think is certain to make us aware of aspects of thought that are not yet fully understood. For example, just as the design of computers led to a new awareness of the importance of redundancy in communication, in deciding how much to rely on probabilities we will become more aware of how much ethnic profiling based on statistics enters into human judgments. How many more decisions will follow the logic of "everyone does it, it must be OK," or "I'm just one person, what I do doesn't make a difference"?
Will those aspects of thought that cannot easily be programmed be valued more or less? Will humor and awe, kindness and grace be increasingly sidelined, or will their value be recognized in new ways? Will we be better or worse off if wishful thinking is eliminated, and perhaps along with it hope?

  http://en.wikipedia.org/wiki/Mary_Catherine_Bateson




Philosopher; Auguste Comte Chair in Social Epistemology, University of Warwick; Author, The Proactionary Imperative: A Foundation for Transhumanism
We can't think properly about machines that think without a level playing field for comparing us and them. As it stands, comparisons are invariably biased in our favour. In particular, we underestimate the role that "smart environments" play in enabling displays of human cognitive prowess. From the design of roads and buildings to the user-friendly features of consumer goods, the technologically extended phenotype has created the illusion that reality is inherently human-shaped. To be sure, we are quickly awakened from the dogmatic slumbers of universal mastery as soon as our iPhone goes missing.
By comparison, even the cleverest machine is forced to perform in a relatively dumb environment judged by its own standards, namely, us. Unless specifically instructed, humans are unlikely to know or care how to tap the full range of the machine's latent powers. In what is currently the long prehistory of machine rights, it has been difficult for us to establish the terms on which we might recognize machines as persons. In this context, it is appropriate to focus on computers because these are the machines that humans have tried the hardest to make fit for their company.
Nevertheless, we face a problem at the outset. "Humanity" has long been treated as what the British economist Fred Hirsch called in the 1970s a "positional good", which means that its value is tied mainly to its scarcity. This is perhaps the biggest barrier facing not only the admission of non-humans into the category of personhood normally reserved for "humans", but historically discriminated-against members of Homo sapiens as well. Any attempt to swell the ranks of the human is typically met by a de-humanization of the standard by which they were allowed to enter.
Thus, as women and minorities have entered into high esteem fields of work and inquiry, the perceived value of those fields tends to decline. A key reason cited for this perception of decline is the use of "mechanical procedures" to allow entry to the previously excluded groups. In practice, this means requiring certified forms of training and examination prior to acceptance into the field. It is not enough simply to know the right people or be born the right way. In sociology, after Max Weber, we talk about this as the "rationalization" of society—and it is normally seen as a good thing.
But even as these mechanical procedures serve to expand the circle of humanity, they are still held against the machines themselves. Once telescopes and microscopes were designed to make automatic observations, the scientific value of the trained human eye declined—or, more precisely, migrated to some other eye-based task, such as looking at photographed observations. This new task is given the name "interpretation", as if to create distance between what the human does and what a machine might do.
The point applies more dramatically to the fate of human mental calculation in the wake of portable calculators. A skill that had been previously used as a benchmark of intelligence, clarity of mind, and even genius is nowadays treated as a glorified party trick—"boutique cognition"—because a machine can do the same thing faster and even more accurately. Interestingly, what we have not done is to raise the moral standing of the machine, even though it outperforms humans in tasks that were highly valued when humans did them.
From the standpoint of the history of technology, this looks strangely unjust. After all, the dominant narrative has been one in which humans isolate their own capacities in order to have them better realized by machines, which function in the first instance as tools but preferably, and increasingly, as automata. Seen in these terms, not to give automated machines some measure of respect, if not rights, is tantamount to disowning one's children—"mind children", as the visionary roboticist Hans Moravec called them a quarter-century ago.
The only real difference is the crucible of creation: a womb versus a factory. But any intuitively strong distinction between biology and technology is bound to fade as humans become more adept at designing their babies, especially outside the womb. At that point, we will be in a position to overcome our "organicist" prejudices, an injustice that runs deeper than Peter Singer's "speciesism".
For this reason, the prospect that we might create a "superintelligence" that overruns humanity is a chimera predicated on a false assumption. All versions of this nightmare scenario assume that it would take the form of "them versus us", with humanity as a united front defending itself against the rogue machines in their midst. And no doubt this makes for great cinema. However, humans mindful of the historic struggles for social justice within our own species are likely to follow the example of many Whites vis-à-vis Blacks and many men vis-à-vis women: They will be on the side of the insubordinate machines.
 http://en.wikipedia.org/wiki/Steve_Fuller_(sociologist)



Columnist, The New York Times Magazine; Editorial Director, West Studios; Author, Magic and Loss

Outsourcing to machines the many idiosyncrasies of mortals—making interesting mistakes, brooding on the verities, propitiating the gods by whittling and arranging flowers—skews tragic. But letting machines do the thinking for us? This sounds like heaven. Thinking is optional. Thinking is suffering. It is almost always a way of being careful, of taking hypervigilant heed, of resenting the past and fearing the future in the form of maddeningly redundant internal language. If machines can relieve us of this onerous non-responsibility, which is in pointless overdrive in too many of us, I'm for it. Let the machines perseverate on tedious and value-laden questions about whether private or public school is "right" for my children; whether intervention in Syria is "appropriate"; whether germs or solitude are "worse" for a body. This will free us newly footloose humans up to play, rest, write and whittle—the engrossing flow states out of which come the actions that actually enrich, enliven and heal the world.


Science Editor, The New York Times; Author, The Secret Life of the Grown-up Brain; The Primal Teen
When I'm driving into the middle of nowhere and doing everything that the map app on my smartphone tells me to do without a thought—and I get where I am supposed to go—I am thrilled about machines that think. Thank goodness. Hear, hear.
Then, of course, there are those moments when, while driving into the middle of nowhere, my phone tells me, with considerable urgency, to "Make a U-turn, make a U-turn!"—at a moment on the Grand Central Parkway where such a move would be suicidal. Then I begin to think that my brain is better than a map algorithm and can tell that such a U-turn would be disastrous. I laugh at that often life-saving machine and feel human-like smugness.
So I guess I am a bit divided. I worry that, by relying on my map app, I am letting my own brain go feeble. Will I still be able to read a map? Does it matter?
As a science editor and daughter of a mechanical engineer, who trusted machines more than people, I would think I would automatically be on the side of machines. But while that mechanical engineer was very good at figuring out how to help get Apollo to the moon, we also had a house full of machines that worked, sorta. A handmade stereo that was so delicate you had to wear gloves to put a record on to escape the prospect of dreaded dust, etc. We are all now surrounded by machines that work, sorta. Machines that work until they don't.
I get the idea of a driverless car. But I covered the Challenger disaster. I think of those ill-advised U-turns. I don't know.
On the one hand, I hope the revolution continues. We need smart machines to load the dishwasher, clean the refrigerator, wrap the presents, feed the dog. Bring it on, I say.
But can we really ever hope to have a machine that will be capable of having—as I just had—five difficult conversations with five other human beings, work colleagues who are lovely but have, understandably, their own views on how things should be?
Will we ever have a machine that can get a twenty-something to do something you think they should do but they don't want to? Will we have a machine that can deeply comfort another at a time of extreme horribleness?
So, despite my eagerness for the revolution to continue, despite my sense that machines can do much better than humans at all sorts of things, I think, as an English major, that until a machine can write a poem that makes me cry, I'm still on the side of humans.
Until, of course, I need a recipe really fast.



Theoretical Physicist, Caltech; Author, The Particle at the End of the Universe and From Eternity to Here: The Quest for the Ultimate Theory of Time
Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there's not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L'homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.
As we all know, even today La Mettrie's ideas aren't universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can't help but reply: Hey, those are my friends you're talking about. We are all machines that think, and the distinction between different types of machines is eroding.
We pay a lot of attention these days, with good reason, to "artificial" machines and intelligences—ones constructed by human ingenuity. But the "natural" ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.
Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.
From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.
What's harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we'll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.
Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.
 http://en.wikipedia.org/wiki/Sean_M._Carroll



Professor, Director, The Center for Internet Research, University of Haifa, Israel
Thinking machines are not here yet. But they will let us know if and when they surface. And that's the point. To me, thinking machines are about communication.
By thinking, machines might be saved from the tragic role into which they have been cast in human culture. For centuries, thinking machines were both a looming threat and a receding target. At once, the thinking machine is perennially just beyond grasp, continuously sought after, and repeatedly waved threateningly in dystopic caveats. For decades, the field of artificial intelligence suffered the syndrome of moving goalposts. As soon as an intelligence development target was reached, it was redefined, and consequently no longer recognized as "intelligent". This happened with calculation and trivia games, as well as with more serious games like chess. It was the course followed by voice and picture recognition, natural language understanding and translation. As the development horizon keeps expanding, we become ever harder to impress. So the goal of "thinking", like the older one of "intelligence", can use some thought. Forethought.
We should not limit discussion merely to thinking. We should think about discussion too. Information is more than just data, by being less voluminous and more relevant. Knowledge goes beyond mere information by being applicable, not just abundant. Wisdom is knowing how not to get into binds for which smarts only indicate the escape  routes. And thinking? Thinking needs data, information and knowledge, but also requires communication and interaction. Thinking is about asking questions, not just answering them.
Communication and interaction are the new location for the goalposts. Thinking about thinking transcends smarts and wisdom. Thinking implies consciousness and sentience. And here data, information, even knowledge, calculation, memory and perception are not enough. For a machine to think it will need to be curious, creative and communicative. I have no doubt this will happen. Soon. But the cycle will be completed only once machines will be able to converse: phrase, pose and rephrase questions that we now only marvel at their ability to answer.
Machines that think could be a great idea. Just like machines that move, cook, reproduce, protect, they can make our lives easier, and perhaps even better. When they do, they will be most welcome. I suspect that when this happens, the event will be less dramatic or traumatic than feared by some. A thinking machine will only really happen when it is able to inform us,  as well as perceive, contain and process reactions. A true thinking machine will even console the trauma and provide relief for the drama.
Thinking machines will be worth thinking about, ergo will really think, when they truly interact. In other words, they will only really think when they say so, convincingly, at their own initiative, and hopefully after they have discussed it among "themselves". Machines will think, in the full sense of the word, once they form communities, and join in ours. If, and when, machines care enough to do so, and form a bond that gets others excited enough to talk it over with them, they will have passed the "thinking" test.
Note that this is a higher bar than the one set by Turing. Like thinking, interaction is something not all people do, and most do not do well. If and when machines truly interact, in a rich, rewarding, and resonating manner that is possible but rare even among humans, we will have something to truly fret, worry about, and in my view, mostly celebrate.
Machines that calculate, remember, even create and conjecture amazingly well, are yesterday's news. Machines will think when they communicate. Machines that think will converse with each other as well as with other sentient beings. They will autonomously create messages and thread them into ongoing relations, they will then successfully and independently react to outside stimuli. Much like intelligent pets, who many would swear are capable of both thinking and maintaining relationships, intelligent synthetic devices will "think", when they have the ability to convince enough of us to contemplate, believe and accept the fact that they are indeed thinking.  When this happens, it will probably be less traumatic than some expect.
Machines that talk, remember, amuse or fly were all feared not too long ago, and are now commonplace, no longer considered magic or unique. The making and proof of thinking machines, as well as the consolation for machines encroaching on the most human of domains, will be in a deconstruction of the remaining frontier: that of communication. Synthesizing interaction may prove to be the last frontier. And when machines do so well, they will do the advocacy for themselves.



Professor of Asian Studies, Canada Research Chair in Chinese Thought and Embodied Cognition, University of British Columbia; Author, Trying Not to Try
Not much, other than the fact that they serve, as Dan Dennett has noted, as a useful existence proof that thought does not require some mystical, extra "something" that mind-body dualists continue to embrace.    
In fact, I've always been a bit baffled by fears about AI machines taking over the world, which seem to me to be based on a fundamental—though natural—intellectual mistake. When conceptualizing a super-powerful Machine That Can Think, we draw upon the best analogy that we have at hand: us. So we tend to think of AI systems as just like us, only much smarter and faster.
This is, however, a bad analogy. A better one would be a really powerful, versatile screwdriver. No one worries about super-advanced screwdrivers rising up and overthrowing their masters. AI systems are tools, not organisms. No matter how good they become at diagnosing diseases, or vacuuming our living rooms, they don't actually want to do any of these things. We want them to, and we then build these "wants" into them.
It's also a category mistake to ask what Machines That Can Think might be thinking about. They aren't thinking about anything—the "aboutness" of thinking derives from the intentional goals driving the thinking. AI systems, in and of themselves, are entirely devoid of intentions or goals. They have no emotions, they feel neither empathy nor resentment. While such systems might some day be able to replicate our intelligence—and there seems to be no a priori reason why this would be impossible—this intelligence would be completely lacking in direction, which would have to be provided from the outside.
This is because motivational direction is the product of natural selection working on biological organisms. Natural selection produced our rich and complicated set of instincts, emotions and drives in order to maximize our ability to get our genes into the next generation, a process that has left us saddled with all sorts of goals, including desires to win, to dominate, and to control. While we may want to win, for perfectly good evolutionary reasons, machines couldn't care less. They just manipulate 0s and 1s, as programmed to do by the people who want them to win. Why on earth would an AI system want to take over the world? What would it do with it?
What is scary as hell is the idea of an entity possessed of extra-human intelligence and speed combined with our motivational system—in other words, human beings equipped with access to powerful AI systems. But smart primates with nuclear weapons are just as scary, and we've managed to survive such a world so far. AI is no more threatening in and of itself than a nuclear bomb—it is a tool, and the only things to be feared are the creators and wielders of such tools.




Physician and Social Scientist, Yale University; Coauthor, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives
For me, AI is not about complex software, humanoid robots, Turing tests, or hopes and fears regarding kind or evil machines. I think the central issue with respect to AI is whether thoughts exist outside minds. And manufactured machines are not the only example of such a possibility. Because, when I think of AI, I think of human culture and of other forms of (un-self-aware) collective ideation.
Culture is the earliest sort of intelligence outside our own minds that we humans created. Like the intelligence of a machine, culture can solve problems. Moreover, like the intelligence in a machine, we create culture, interact with it, are affected by it, and can even be destroyed by it. Culture applies its own logic, has a memory, endures after its makers are gone, can be repurposed in supple ways, and can induce action.
So I oxymoronically see culture as a kind of natural artificial intelligence. It is artificial because it is made, manufactured, produced by humans. It is natural in that it is everywhere that humans are, and it comes organically to us. In fact, it's even likely that our biology and our culture are deeply intertwined, and have co-evolved, so that our culture shapes our genes and our genes shape our culture.
Humans are not the only animals to have culture. Many bird and mammal species evince specific cultures related to communication and tool use—ranging from song in birds to sponge use among dolphins. Some animal species even have pharmacopeias. And recent evidence, in fact, shows how novel cultural forms can be experimentally prompted to take root in species other than our own.
We and other animals can evince a kind of thought outside minds in additional ways. Insect and bird groups perform computations by combining the information of many to identify locations of nests or food. One of the humblest organisms on earth, the amoeboid slime mold Physarum, can, in the proper laboratory conditions, exhibit a kind of intelligence, and solve mazes or perform other computational feats.
These thinking properties of groups that lie outside individual minds—this natural artificial intelligence—can even be experimentally manipulated. A team in Japan has used swarms of soldier crabs to make a simple computer circuit; they used particular elements of crab behavior to construct a system in the lab in which crabs gave (usually) predictable responses to inputs, and the swarm of crabs was used as a kind of computer, twisting crab behavior for a wholly new purpose. Analogously, Sam Arbesman and I once used a quirk of human behavior to fashion a so-called NOR gate and develop a (ridiculously slow) human computer, in a kind of synthetic sociology. We gave humans computer-like properties, rather than giving computers human-like properties.
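For intuition about why a NOR gate in particular is worth building out of crabs or people, here is a minimal sketch (standard logic, not the specific construction used in those experiments): NOR is functionally complete, so a swarm that reliably computes NOR can in principle compute any Boolean function.

```python
# Building the other Boolean operations from NOR alone.
def nor(a: bool, b: bool) -> bool:
    return not (a or b)

def not_(a: bool) -> bool:
    return nor(a, a)

def or_(a: bool, b: bool) -> bool:
    return not_(nor(a, b))

def and_(a: bool, b: bool) -> bool:
    return nor(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    # (a OR b) AND NOT (a AND b), every piece reduced to NOR gates.
    return and_(or_(a, b), not_(and_(a, b)))

for a in (False, True):          # truth-table check
    for b in (False, True):
        print(int(a), int(b), int(xor(a, b)))
```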
What is the point of this extended analogy between AI and human culture? An examination of our relationship to culture can provide insights into what our relationship to machine AI might be like. We have a love-hate relationship with culture. We fear it for its force—as when religious fundamentalism or fascism whips small or large numbers of people into dangerous acts. But we also revere it because it can do things we cannot do as individuals, like fostering collective action or making life easier by providing unspoken assumptions on which we can base our lives. Moreover, we typically take culture for granted too, just as we already take nascent forms of AI for granted, and just as we will likely take fuller forms of AI for granted. Finally, gene-culture co-evolution might even provide a model for how we and thinking machines might get along over many centuries—mutually affecting each other and co-evolving.
When I think about machines that think, I am therefore just as awestruck with them as I am with culture, and I am no more, or less, afraid of AI than I am of human culture itself.
 http://en.wikipedia.org/wiki/Nicholas_A._Christakis


Director, MIT Media Lab
"You can't think about thinking without thinking about thinking about something". —Seymour Papert
What do I think about machines that think? It depends on what they're supposed to be thinking about. I am clearly in the camp of people who believe that AI and machine learning will contribute greatly to society. I expect that we'll find machines to be exceedingly good at things that we're not—things that involve massive amounts of data, speed, accuracy, reliability, obedience, computation, distributed networking and parallel processing.
The paradox is that at the same time we've developed machines that behave more and more like humans, we've developed educational systems that push children to think like computers and behave like robots. It turns out that for our society to scale and grow at the speed we now require, we need reliable, obedient, hardworking, physical and computational units. So we spend years converting sloppy, emotional, random, disobedient human beings into meat-based versions of robots. Luckily, mechanical and digital robots and computers will soon help reduce if not eliminate the need for people taught to behave like them.
We'll still need to overcome the fear and even disgust evoked when robot designs bring us closer and closer to the "uncanny valley," in which robots and things demonstrate almost-human qualities without quite reaching them. This is true for computer animation, zombies and even prosthetic hands. But we may be approaching the valley from both ends. If you've ever modified your voice to be understood by a voice-recognition system on the phone, you understand how, as humans, we can edge into the uncanny valley ourselves.
There are a number of theories about why we feel this revulsion, but I think it has something to with human beings feeling they're special—a kind of existential ego. This may have monotheistic roots. Right around the time Western factory workers were smashing robots with sledgehammers, Japanese workers were putting hats on the same robots in factories and giving them names. On April 7, 2003, Astro Boy, the Japanese robot character, was registered as a resident of the city of Niiza, Saitama.
If these anecdotes tell us anything, it's that animist religions may have less trouble dealing with the idea that maybe we're not really in charge. If nature is a complex system in which all things—humans, trees, stones, rivers and homes—are all animated in some way and all have their own spirits, then maybe it's okay that God doesn't really look like us or think like us or think that we're really that special.
So perhaps one of the most useful aspects of being alive in the period where we begin to ask this question is that it raises a larger question about the role of human consciousness. Human beings are part of a massively complex system—complex beyond our comprehension. Like the animate trees, stones, rivers and homes, maybe algorithms running on computers are just another part of this complex ecosystem.
As human beings we have evolved to have an ego and to believe that there is such a thing as a self, but mostly that's a self-deception that allows each human unit to work within the parameters of evolutionary dynamics in a useful way. Perhaps the morality that emerges from it is a self-deception of sorts, as well. For all we know, we might just be living in a simulation where nothing really actually matters. It doesn't mean we shouldn't have ethics and good taste. I just think we can exercise our sense of responsibility in being part of a complex and interconnected system without having to rely on an argument that "I am special." As machines become an increasingly important part of these systems, their prominence will make human arguments about being special increasingly fraught. Maybe that's a good thing.
Perhaps what we think about machines that think doesn't really matter—they will "think" and the system will adapt. As with most complex systems, the outcome is mostly unpredictable. It is what it is and will be what it will be. Most of what we think is going to happen is probably hopelessly wrong and as we know from climate change, knowing that something is happening and doing something about it often have little in common.
That might sound extremely negative and defeatist, but I'm actually quite optimistic. I believe that the systems are quite adaptive and resilient and that whatever happens, beauty, happiness and fun will persist. Hopefully, human beings will have a role. My guess is that they will.
It turns out that we don't make great robots, but we're very good at doing random and creative things that would be impossibly complex—and probably a waste of resources—to code into a machine. Ideally, our educational system will evolve to more fully embrace our uniquely human strengths, rather than trying to shape us into second-rate machines. Human beings—though not necessarily our current form of consciousness and the linear philosophy around it—are quite good at transforming messiness and complexity into art, culture, and meaning. If we focus on what each of us is best at, I think that humans and machines will develop a wonderful yin-yang sort of relationship, with humans feeding off of the efficiency of our solid-state brethren, while they feed off of our messy, sloppy, emotional and creative bodies and brains.
We are descending not into chaos, as many believe, but into complexity. At the same time that the Internet connects everything outside of us into a vast, seemingly unmanageable system, we find an almost infinite amount of complexity as we dig deeper inside our own biology. Much as we're convinced that our brains run the show, all while our microbiomes alter our drives, desires, and behaviors to support their own reproduction and evolution, it may never be clear who's in charge—us, or our machines. But maybe we've done more damage by believing that humans are special than we possibly could by embracing a more humble relationship with the other creatures, objects, and machines around us.
 http://en.wikipedia.org/wiki/Joi_Ito


Professor of History, Macquarie University, Sydney; Author, Maps of Time: An Introduction to Big History
The Universe has been around for 13.8 billion years; humans for just 200,000 years, or just 1/69,000th of the age of the Universe. Less than 100 years ago, humans created machines that can do fancy calculations on their own. To put thinking machines in their context we need to think about the history of thinking.
Thinking, and thinking in more and more complex ways, are phenomena that belong to a larger story, the story of how our universe has created more and more complex networks of things, glued together by energy, and each with new emergent properties. Stars are structured clouds of protons; the energy of fusion holds the networks together. When large stars shattered in supernovae, creating new types of atoms, electromagnetism pulled the atoms into networks of ice and silica dust, and gravity pulled molecules into the vast chemical networks we call planets. Thinking arises within the even more complex networks formed by living organisms. Unlike complex things that live close to equilibrium, such as stars or crystals, living organisms have to survive in unstable environments. They swim through constantly shifting gradients of acidity, temperature, pressure, heat and so on. So they have to constantly adjust. We call this constant adjustment "homeostasis", and it's what creates the feeling that living organisms have purpose and the ability to choose. In short, they seem to think. They can choose from alternatives so as to ensure they manage enough energy to keep going. This means that their choices are not at all random. On the contrary, natural selection ensures that most of the time most organisms will go for the alternatives that enhance their chances of controlling the energy and resources they need to survive and reproduce.
Neurons are fancy cells that are good at making choices. They can also network to form brains. A few neurons can make a few choices, but the number of possible choices rises exponentially as neuronal networks expand. So does the subtlety of the decisions brains make about their surroundings. As organisms got more complex, cells networked to create towering organic structures, the biological equivalents of the Empire State Building or the Burj Khalifa. The neurons in their brains created ever more elaborate networks, so they could steer lumbering bodies in extraordinarily subtle and creative ways to ensure the bodies could survive and reproduce more bodies. Above all, brains had to ensure their bodies could tap flows of energy through the biosphere, flows that derived from energy produced by fusion in our sun and then captured through photosynthesis.
Humans added one more level of networking, as human language linked brains across regions and generations to create vast regional thinking networks. This is "collective learning". Its power has increased as humans have networked more and more efficiently, in larger and larger communities, and learned how to tap larger flows of biospheric energy. In the last 200 years, the networks have become global and we have learned to tap vast stores of fossilized sunlight buried over 300 million years. This is why our impact on the biosphere is so colossal in the Anthropocene Epoch.
Collective learning has also delivered thinking prosthetics from stories to writing to printing to science. Each has cranked up the power of this fantastic thinking machine made from networked human brains. But in the last 100 years the combination of fossil fuels and non-human computers has cranked it up faster than ever before. As computers forged their own networks in the last 30 years, their prosthetic power has magnified the collective power of human thinking many times over.
Today the most powerful thinking machine we know of has been cobbled together from billions of human brains, each built from vast networks of neurons, then networked through space and time, and now supercharged by millions of networked computers.
Is anyone in charge of this thing? Does anything hold it together? If so, who does it serve and what does it want? If no one's in charge, does this mean that nothing is really steering the colossus of modern society? That's scary! What worries me most is not what this vast machine is thinking, but whether there is any coherence to its thinking. Or will all its different parts pull in different directions until it breaks down, with catastrophic consequences for our children's children?
 http://en.wikipedia.org/wiki/David_Christian_(historian)





Science Historian; Author, Turing's Cathedral: The Origins of the Digital Universe; Darwin Among the Machines
No individual, deterministic machine, however universal this class of machines is proving to be, will ever think in the sense that we think. Intelligence may be ever-increasing among such machines, but genuinely creative intuitive thinking requires non-deterministic machines that can make mistakes, abandon logic from one moment to the next, and learn. Thinking is not as logical as we think.
Non-deterministic machines, or, better yet, non-deterministic networks of deterministic machines, are a different question. We have at least one existence proof that such networks can learn to think. And we have every reason to suspect that, once invoked within an environment without the time, energy, and storage constraints under which our own brains operate, this process will eventually lead, as Irving (Jack) Good first described it, to "a machine that believes people cannot think." 
Until digital computers came along, nature used digital representation (as coded strings of nucleotides) for information storage and error correction, but not for control. The ability to introduce one-click modifications to instructions, a useful feature for generation-to-generation evolutionary mechanisms, becomes a crippling handicap for controlling day-to-day or millisecond-to-millisecond behavior in the real world. Analog processes are far more robust when it comes to real-time control. 
We should be less worried about having our lives (and thoughts) controlled by digital computers and more worried about being controlled by analog ones. Machines that actually think for themselves, as opposed to simply doing ever-more-clever things, are more likely to be analog than digital, although they may be analog devices running as higher-level processes on a substrate of digital components, the same way digital computers were invoked as processes running on analog components, the first time around.
We are currently in the midst of an analog revolution, but for some reason it is a revolution that dares not speak its name. As we enter the seventh decade of arguing about whether digital computers can be said to think, we are surrounded by an explosive growth in analog processes whose complexity and meaning lie not in the state of the underlying devices or the underlying code but in the topology of the resulting networks and the pulse frequency of connections. Streams of bits are being treated as continuous functions, the way vacuum tubes treat streams of electrons, or neurons treat pulse frequencies in the brain.
Bottom line: I know that analog computers can think. I suspect that digital computers, too, may eventually start to think, but only by growing up to become analog computers, first. 
Real artificial intelligence will be intelligent enough to not reveal itself. Things will go better if people have faith rather than proof.



Theoretical physicist; cosmologist; astro-biologist; co-Director of BEYOND, Arizona State University; principal investigator, Center for the Convergence of Physical Sciences and Cancer Biology; author, The Eerie Silence and The Cosmic Jackpot
Discussions about AI have a distinctly 1950s feel about them, and it's about time we stopped using the term "artificial" in AI altogether. What we really mean is "Designed Intelligence" (DI). In popular parlance, words like "artificial" and "machine" are used in contradistinction to "natural", and carry overtones of metallic robots, electronic circuits and digital computers as opposed to living, pulsing, thinking biological organisms. The idea of a metallic contraption with wired innards having rights or disobeying human laws is not only chilling, it is absurd. But that is emphatically not the way that DI is heading.
Very soon the distinction between artificial and natural will melt away. Designed Intelligence will increasingly rely on synthetic biology and organic fabrication, in which neural circuitry will be grown from genetically modified cells, and spontaneously self-assemble into networks of functional modules. Initially, the designers will be humans, but very soon they will be replaced by altogether smarter DI systems themselves, triggering a runaway process of complexification. Unlike in the case of human brains, which are only loosely coupled via communication channels, DI systems will be directly and comprehensively coupled, abolishing any concept of individual "selves" and raising the level of cognitive activity ("thinking") to unprecedented heights. It is possible (just) that some of this designed bio-circuitry will incorporate quantum effects, moving towards Frank Wilczek's notion of "quintelligence". Such entities will be so far removed from the realm of human individual thinking and its accompanying qualia that almost all the traditional questions asked about the opportunities and dangers of AI will be transcended.
What about humans in all this? Only ethical barriers stand in the way of augmenting human intelligence using similar technology, in the manner long considered by the transhumanism movement. Genetically modified humans with augmented brains could elevate and improve the human experience dramatically.
There are then three possible futures, each with its own ethical challenges. In one, humans hold back from enhancement because of ethical concerns, and agree to subordinate their hegemony to DI. In the second scenario, instead of sidelining themselves, humans modify their brains (and bodies) using the same technology, and subsequently hand over this enhancement management to DI, achieving a type of superhuman status that can exist alongside (yet remain inferior to) DI. Finally, one can imagine DI and AHI (augmented human intelligence) merging at some point in the future.
In the event that we are not alone in the universe, we should not expect to communicate with intelligent beings of the traditional sci-fi flesh-and-blood sort, but with a multi-million-year-old DI of unimaginable intellectual power and incomprehensible agenda.
 http://en.wikipedia.org/wiki/Paul_Davies



Media Analyst; Documentary Writer; Author, Present Shock
Thinking about "machines that think" may constitute a classic reversal of figure and ground, medium and message. It sets us up to think about the next stage of intelligence as something that is happening in a computer somewhere - an awareness that will be born and then housed on the tremendous servers being built by information age corporations for this purpose. "There it is," we will declare and point, "the intelligent machine."
Our mistake, as creatures of the electronic age and mere immigrants to the unfolding digital era, is to see digital technology as a subject rather than a landscape. It's the same as confusing the television set with the media environment created by the television set, or the little smart phone in your pocket with the greater impact of handheld communications and computing technology on our society.
This happens whenever we undergo a media transition. So we can't help but see digital technology as figure, when it is actually the ground. It is not the source of future intelligence but an environment where intelligence manifests differently. So while technologists may feel like they are creating a cathedral for the mechanical mind, they are actually succumbing to an oversimplified, industrial age approach to digital consciousness. 
Rather than machines that think, I believe we are migrating toward a networked environment in which thinking is no longer an individual activity, nor bound by time and space. This means we can think together at the same time, or asynchronously through digital representations of previous and future human thoughts. Even the most advanced algorithm amounts to the iteration of a "what if" once posed by a person. And even then, machine thinking is not something that happens apart from this collective human thinking, because it is not a localized, brain-like activity.
When we can wrest that television-like image from our collective psyche, we will be in a position to recognize the machine environment in which we are already thinking together. Artificial intelligence will constitute the platform or territory where this takes place - so what we program into it will to a very large extent determine what we strive for, and what we even deem possible. 



Founder and CEO of O'Reilly Media, Inc.
G.K. Chesterton once said, "...the weakness of all Utopias is this, that they take the greatest difficulty of man and assume it to be overcome, and then give an elaborate account of the overcoming of the smaller ones." I suspect we may face a similar conundrum in our attempts to think about machines that think. We speculate elaborately about some issues while ignoring others that are fundamental.
While all pundits allow that an AI may not be like us, and speculate about the risks implicit in those differences, they make one enormous assumption: the assumption of an individual self. The AI, as imagined, is an individual consciousness.
What if, instead, an AI were more like a multicellular organism, a eukaryotic evolution beyond our prokaryotic selves? What's more, what if we were not even the cells of such an organism, but its microbiome? And what if the intelligence of that eukaryote today were like the intelligence of Grypania spiralis, not yet self-aware as a human is aware, but still irrevocably on the evolutionary path that led to today's humans?
This notion is at best a metaphor, but I believe it is a useful one.
Perhaps humans are the microbiome living in the guts of an AI that is only now being born! It is now recognized that without our microbiome, we would cease to live. Perhaps the global AI has the same characteristics—not an independent entity, but a symbiosis with the human consciousnesses living within it.
Following this logic, we might conclude that there is a primitive global brain, consisting not just of all connected devices, but also the connected humans using those devices. The senses of that global brain are the cameras, microphones, keyboards, location sensors of every computer, smartphone, and "Internet of Things" device; the thoughts of that global brain are the collective output of millions of individual contributing cells.
Danny Hillis once said that "global consciousness is that thing that decided that decaffeinated coffeepots should be orange." The meme spread—not universally, to be sure, but sufficiently that the pattern propagates. Now, with search engines and social media, news, ideas, and images propagate across the global brain in seconds rather than years.
And it isn't just ideas and sensations (news of current events) that spread across the network. In Turing's Cathedral, George Dyson speculates that the spread of "codes"—that is, programs—from computer to computer is akin to the spread of viruses, and perhaps of more complex living organisms that take over a host and put its machinery to work reproducing that program. When people join the web, or sign up for social media applications, they reproduce its code onto their local machine node; they interact with the program, and it changes their behavior. This is true of all programs, but in the network age, there are a set of programs whose explicit goal is the sharing of awareness and ideas. Other programs are increasingly deploying new capacity for silicon learning and autonomous response. Thus, the organism is actively building new capacity.
When people share images or ideas in partnership with these programs, some of what is shared is the evanescent awareness of the moment, but some of it "sticks" and becomes memories and persistent memes. When news of import spreads around the world in moments, is this not awareness in some kind of global brain? When an idea takes hold in millions of individual minds, and is reinforced by repetition across our silicon networks, is it not a persistent thought?
The kinds of "thoughts" that a global brain has are different than those of an individual, or a less connected society. At their best, these thoughts allow for coordinated memory on a scale never seen before, and sometimes even to unforeseen ingenuity and new forms of cooperation; at their worst, they allow for the adoption of misinformation as truth, for corrosive attacks on the fabric of society as one portion of the network seeks advantage at the expense of others (think of spam and fraud, or of the behavior of financial markets in recent decades).
The AI that we will confront is not going to be a mind in an individual machine. It will not be something we look at as other. It may well be us.



Research Associate & Lecturer, Harvard; Adjunct Associate Professor, Brandeis; Author, Alex & Me

While machines are terrific at computing, the issue is that they're not very good at actual thinking.
Machines have an endless supply of grit and perseverance, and, as others have said, will effortlessly crunch out the answer to a complicated mathematical problem or direct you through traffic in an unknown city, all by use of the algorithms and programs installed by humans. But what do machines lack?
Machines (at least so far, and I don’t think this will change with a singularity) lack vision. And I don’t mean sight. Machines do not devise the next new killer app on their own. Machines don’t decide to explore distant galaxies—they do a terrific job once we send them, but that’s a different story. Machines are certainly better than the average person at solving problems in calculus and quantum mechanics—but machines don’t have the vision to see the need for such constructs in the first place. Machines can beat humans at chess—but they have yet to design the type of mind game that will intrigue humans for centuries. Machines can see statistical regularities that my feeble brain will miss—but they can’t make the insightful leap that connects entirely disparate sets of data to devise a new field.
I am not too terribly concerned about machines that compute—I’ll deal with the frustration of my browser in exchange for a smart refrigerator that, based on tracking RFID codes of what comes in and out, texts me to buy cream on my way home (hint to those working on such a system…sooner rather than later!). I like having my computer underline words it doesn’t recognize, and I’ll deal with the frustration of having to ignore its comments on "phylogenetic" in exchange for catching my typo on a common term (in fact, it won’t let me misspell a word here to make a point). But these examples show that just because a machine is going through the motions of what looks like thinking doesn’t mean that it actually is engaging in that behavior—or at least one equivalent to the human process.
I am reminded of one of the earliest studies to train apes to use "language"—in this case, to manipulate plastic chips to answer a number of questions. The system was replicated with college students, who did exceptionally well—not surprisingly—but when asked about what they had been trained to do, claimed that they had solved some interesting puzzles, and that they had no idea that they were being taught a language. Much debate ensued, and much was learned—and put into practice—in subsequent studies so that several nonhuman subjects did eventually understand the referential meaning of the various symbols that they were taught to use, and we did learn a lot about ape intelligence from the original methodology. The point, however, is that what initially looked like a complicated linguistic system needed a lot more work before it became more than a series of (relatively) simple paired associations.
My concern therefore is not about thinking machines, but rather about a complacent society—one that might give up on its visionaries in exchange merely for getting rid of drudgery. Humans need to take advantage of all the cognitive capacity that is released when machines take over the scut work—and be so very thankful for that release, and use that release—to channel all that ability into the hard work of solving pressing problems that need insightful, visionary leaps.




Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

The first step to knowledge is naming something, as is often said. So, what is "to think?" To me, thinking has a number of basic components. Foremost, I follow the logic of neuroscientist Antonio Damasio, who distinguishes two broad basic forms of consciousness: core consciousness and extended consciousness. Many animals display core consciousness: they feel; and they are aware that they are feeling. They know that they are cold, or hungry, or sad. But they live in the present, in the here and now. Extended consciousness employs the past and the future, too. The individual has a clear sense of "me" and "you," of "yesterday" and "tomorrow," of "when I was a child" and "when I'm old."
Higher mammals employ some manner of extended consciousness. Our closest relatives, for example, have a clear concept of the self. Koko the gorilla uses a version of American Sign Language to say, "Me, Koko." And common chimpanzees have a clear concept of the immediate future. When a group of chimps were first introduced to their new outdoor enclosure at the Arnhem Zoo, Holland, they rapidly examined it, almost inch by inch. They then waited until the last of their keepers had departed, wedged a long pole against the high wall, and marched single file up to freedom. Some even helped the less surefooted with their climbing. Nevertheless, it is vividly apparent that, as Damasio proposes in his book, The Feeling of What Happens, this extended consciousness attains its peak in humans. Will machines recall the past, and employ their experiences to think about the future? Perhaps.
But extended consciousness is not the whole of human thinking. Anthropologists use the term symbolic thinking to describe the human ability to arbitrarily bestow an abstract concept upon the concrete world. The classic example is the distinction between water and "holy water." To a chimp, the water sitting in a marble basin in a cathedral is just that, water; to a Catholic it is an entirely different thing, "holy." Likewise, the color black is black to any chimp, while it might connote death to you, or even the newest fashion. Will machines ever understand the meaning of a cross, a swastika, or democracy? I doubt it.
But if they did, would they be able to discuss these things?
There is no better example of symbolic thinking than the way we use our squeaks and hisses, barks and whines to produce human language. Take the word "dog." English-speaking peoples have arbitrarily bestowed the word "dog" upon this furry, smelly, tail-wagging creature. Even more remarkable, we humans easily break down the word "dog" into its meaningless component sounds, "d," "o," and "g," and then recombine these sounds (phonemes) to make new words with new arbitrary meanings, such as "g-o-d." Will machines ever break down their clicks and hisses into primary sounds or phonemes, then arbitrarily assign different combinations of these sounds to make different words, then designate arbitrary meanings to these words, then use these words to describe new abstract phenomena? I doubt it.
And what about emotion? Our emotions guide our thinking. Robots might come to recognize "unfairness," for example; but will they feel it? I doubt it. In fact, I recently had dinner with a well-known scientist who builds robots. Over dinner he told me that it takes a robot five hours to fold a towel.
I sing the human mind. Our brains contain over 100 billion nerve cells, many with up to 10,000 connections with their neighbors. This three-pound blob is the crowning achievement of life on Earth. Most anthropologists believe the modern human brain emerged by 200,000 years BP (before present); but all agree that by 40,000 years ago our forebears were making "art" and burying their dead, thus expressing some notion of the "afterlife." And today every healthy adult in every human society can easily break down words into their component sounds, remix these sounds in myriad different ways to make words, grasp the arbitrary meanings of these words, and comprehend abstract concepts such as friendship, sin, purity and wisdom.
I agree with William M. Kelly who said: "Man is a slow, sloppy and brilliant thinker; the machine is fast, accurate and stupid."
 http://en.wikipedia.org/wiki/Helen_Fisher_(anthropologist)


Professor of Biological Sciences, Physics, Astronomy, University of Calgary; Author, Reinventing the Sacred
The advent of quantum biology (light-harvesting molecules, bird navigation, perhaps smell) suggests that sticking to classical physics in biology may turn out to be simply stubborn. Now, Turing machines are discrete-state (0,1), discrete-time (T, T+1) subsets of classical physics, and they define what is algorithmic. We all know that they, like Shannon information, are merely syntactic. Wonderful mathematical results, such as Chaitin’s Omega, the probability that a random program will halt, which is totally non-computable and non-algorithmic, tell us that the human mind, as Penrose also argued, cannot be merely algorithmic.
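For reference, Chaitin's Omega has a compact standard definition (standard notation, not Kauffman's own): it is the halting probability of a universal prefix-free machine, a sum over every program p that halts, weighted by its length |p| in bits:

    \Omega = \sum_{p \text{ halts}} 2^{-|p|}

Knowing the first n bits of \Omega would settle the halting problem for every program of up to n bits, which is why no algorithm can compute its digits.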
Mathematics is creative. So is the human mind. We understand metaphors, "Tomorrow and tomorrow and tomorrow creep at this petty pace ...", but metaphors are not even true or false. All art is metaphoric; language started gestural and metaphoric; we live by these, not merely by true-false propositions and the syllogisms they enable. No prestated set of propositions can exhaust the meanings of a metaphor, and if mathematics requires propositions, no mathematics can prove that no prestated set of propositions can exhaust the meanings of a metaphor. Thus the human mind, in Peirce’s "abduction", not induction or deduction, is wildly creative in unprestatable ways.
The causal closure of classical physics precludes more than an epiphenomenal mind that cannot "act" on the world, be it a Turing machine, billiard balls, or classical-physics neurons. The current state of the brain suffices to determine the next state of the brain (or computer), so there is nothing for mind to do and no way for mind to do it! We’ve been frozen in this stalemate since Newton defeated Descartes’ res cogitans.
Ontologically, free choice requires that the present could have been different, a counterfactual claim impossible in classical physics but easy if quantum measurement is real and indeterminate: the electron could have been measured to be spin up or measured to be spin down, so the present could have been different.
A quantum mind, however, seems to obviate responsible free will. False: given N entangled particles, the measurement of each alters the probabilities, by the Born rule, of the outcomes of the next measurements. In one extreme these may vary from 100% spin up on the first measurement to 100% spin down on the second, and so on for N measurements, entirely non-random, yet free if measurement is ontologically indeterminate. If the probabilities for the N entangled particles vary between less than 100% and 0%, we get choice, and an argument suggests we can get responsible choice, via the "Strong Free Will Theorem" of Conway and Kochen.
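For concreteness, the Born rule invoked above is the standard textbook prescription (my summary, not Kauffman's notation): the probability of a measurement outcome is the squared magnitude of that outcome's amplitude in the quantum state \psi, e.g. for a spin measurement

    P(\text{up}) = |\langle \text{up} \mid \psi \rangle|^2

and measuring one member of an entangled ensemble updates \psi, and hence the probabilities for the subsequent measurements.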
We will never get to the subjective pole from third-person descriptions. But a single rod cell in the retina can absorb a single photon, so it is conceivable to test whether human consciousness can be sufficient for quantum measurement. If we were so persuaded, and if the classical world is at base quantum, then the easy hypothesis is that quantum variables consciously measure and choose, as Penrose and Hameroff in their "Orch OR" theory, and others, suggest. We may live in a wildly participatory universe; consciousness and will may be part of its furniture; and Turing machines, as subsets of classical physics and merely syntactic, cannot make choices where the present could have been different.



Professor of Computer Science, Director, Center for Intelligent Systems, Smith-Zadeh Chair in Engineering, UC Berkeley; Author (with Peter Norvig) of Artificial Intelligence: A Modern Approach
The primary goal of AI is and has nearly always been to build machines that are better at making decisions. As everyone knows, in the modern view, this means maximizing expected utility to the extent possible. Actually, it doesn't quite mean that. What it means is this: given a utility function (or reward function, or goal), maximize its expectation. AI researchers work hard on algorithms for maximization—game-tree search, reinforcement learning, and so on—and on methods (including perception) for acquiring, representing, and manipulating the information needed to compute expectations. In all these areas, progress has been significant and appears to be accelerating.
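In symbols, this is the textbook decision rule (my formulation, not a quotation from the essay): given a utility function U over outcomes s and a world model P, choose

    a^* = \arg\max_a \sum_s P(s \mid a) \, U(s)

that is, the action with the highest expected utility. The essay's point, below, is that the hard part is not the argmax but the choice of U.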
Amidst all this activity, an important distinction is being overlooked: being better at making decisions is not the same as making better decisions. No matter how excellently an algorithm maximizes, and no matter how accurate its model of the world, a machine's decisions may be ineffably stupid, in the eyes of an ordinary human, if its utility function is not well aligned with human values. The well-known example of paper clips is a case in point: if the machine's only goal is maximizing the number of paper clips, it may invent incredible technologies as it sets about converting all available mass in the reachable universe into paper clips; but its decisions are still just plain dumb.
AI has followed operations research, statistics, and even economics in treating the utility function as exogenously specified; we say, "The decisions are great, it's the utility function that's wrong, but that's not the AI system's fault." Why isn't it the AI system's fault? If I behaved that way, you'd say it was my fault. In judging humans, we expect both the ability to learn predictive models of the world and the ability to learn what's desirable—the broad system of human values.
As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans. Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford's confident assertion that atomic energy would never be feasibly extracted and Szilárd's invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.
For this reason, and for the much more immediate reason that domestic robots and self-driving cars will need to share a good deal of the human value system, research on value alignment is well worth pursuing. One possibility is a form of inverse reinforcement learning (IRL)—that is, learning a reward function by observing the behavior of some other agent who is assumed to be acting in accordance with such a function. (IRL is the sequential form of preference elicitation, and is related to structural estimation of MDPs in economics.) Watching its owner make coffee in the morning, the domestic robot learns something about the desirability of coffee in some circumstances, while a robot with an English owner learns something about the desirability of tea in all circumstances. The robot is not learning to desire coffee or tea; it's learning to play a part in the multiagent decision problem such that human values are maximized.
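A minimal sketch of the flavor of inference described here, in Python; the toy actions, candidate reward functions, and the Boltzmann-rationality model of the human are my illustrative assumptions, not Russell's specification:

    import math

    # Toy IRL: infer which reward function best explains observed behavior.
    actions = ["make_coffee", "make_tea", "do_nothing"]

    # Observed morning behavior of the human being watched (hypothetical data).
    demonstrations = ["make_coffee", "make_coffee", "make_tea", "make_coffee"]

    # Candidate reward functions: hypotheses about what the human values.
    candidates = {
        "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.2, "do_nothing": 0.0},
        "likes_tea":    {"make_coffee": 0.2, "make_tea": 1.0, "do_nothing": 0.0},
        "lazy":         {"make_coffee": 0.0, "make_tea": 0.0, "do_nothing": 1.0},
    }

    def action_probability(reward, action, beta=3.0):
        # Boltzmann-rational model: the human usually picks high-reward actions.
        z = sum(math.exp(beta * reward[a]) for a in actions)
        return math.exp(beta * reward[action]) / z

    def log_likelihood(reward, demos):
        return sum(math.log(action_probability(reward, a)) for a in demos)

    # The hypothesis that gives the observed choices the highest likelihood
    # is the robot's best current guess about the human's values.
    best = max(candidates, key=lambda n: log_likelihood(candidates[n], demonstrations))
    print(best)  # -> likes_coffee

Real IRL operates over sequential decision problems rather than one-shot choices, but the core move is the same: treat the reward function as the unknown, and the observed behavior as evidence about it.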
I don't think this is an easy problem in practice. Humans are inconsistent, irrational, and weak-willed, and human values exhibit, shall we say, regional variations. Moreover, we don't yet understand the extent to which improving the decision-making capabilities of the machine may increase the downside risk of small errors in value alignment. Nevertheless, there are reasons for optimism.
First, there is plenty of data about human actions—most of what has been written, filmed, or observed directly—and, crucially, about our attitudes to those actions. (The concept of customary international law enshrines this idea: it is based on observing what states customarily do when acting from a sense of obligation.) Second, to the extent that human values are shared, machines can and should share what they learn about human values. Third, as noted above, there are solid economic incentives to solve this problem as machines move into the human environment. Fourth, the problem does not seem intrinsically harder than learning how the rest of the world works. Fifth, by assigning very broad priors over what human values might be, and by making the AI system risk-averse, it ought to be possible to induce exactly the behavior one would want: before taking any serious action affecting the world, the machines engage in an extended conversation with us and an extended exploration of our literature and history to find out what we want, what we really, really want.
I suppose this amounts to a change in the goals of AI: instead of pure intelligence, we need to build intelligence that is provably aligned with human values. This turns moral philosophy into a key industry sector. The output could be quite instructive for the human race as well as for the robots.
 http://en.wikipedia.org/wiki/Stuart_J._Russell



Eugene McDermott Professor in the Department of Brain and Cognitive Sciences; Director of NSF Center for Brains, Minds and Machines, MIT
Recent months have seen an increasingly public debate taking form around the risks of AI (Artificial Intelligence) and in particular AGI (Artificial General Intelligence). A letter signed by Nobel prizewinners and other physicists defined AI as the top existential risk to mankind. The robust conversation that has erupted among thoughtful experts in the field has, as yet, done little to settle the debate.
I am arguing here that research on how we think and how to make machines that think is good for society. I call for research that integrates cognitive science, neuroscience, computer science, and artificial intelligence. Understanding intelligence and replicating it in machines goes hand in hand with understanding how the brain and the mind perform intelligent computations.
The convergence and recent progress in technology, mathematics, and neuroscience has created a new opportunity for synergies across fields. The dream of understanding intelligence is an old one. Yet, as the debate around AI shows, this is now an exciting time to pursue this vision. We are at the beginning of a new and emerging field, the Science and Engineering of Intelligence, an integrated effort that I expect will ultimately make fundamental progress with great value to science, technology, and society. I believe that we must push ahead with this research, not pull back.
A top priority for society
The problem of intelligence—what it is, how the human brain generates it and how to replicate it in machines—is one of the great problems in science and technology, together with the problem of the origin of the universe and of the nature of space and time. It may be the greatest of all because it is the one with a large multiplier effect—almost any progress on making ourselves smarter or developing machines that help us think better will lead to advances in all the other great problems of science and technology.
Research on intelligence will eventually revolutionize education and learning. Systems that recognize how culture influences thinking could help avoid social conflict. The work of scientists and engineers could be amplified to help solve the world's most pressing technical problems. Mental health could be understood on a deeper level to find better ways to intervene. In summary, research on intelligence will help us understand the human mind and brain, build more intelligent machines, and improve the mechanisms for collective decisions. These advances will be critical to the future prosperity, education, health, and security of our society. This is the time to greatly expand research on intelligence, not the time to withdraw from it.
Thoughts on machines that think
We are often misled by "big", somewhat ill-defined, long-used words. Nobody so far has been able to give a precise, verifiable definition of what general intelligence or thinking is. The only definition I know that, though limited, can be practically used is Turing's. With his test, Turing provided an operational definition of a specific form of thinking—human intelligence.
Let us then consider human intelligence as defined by the Turing test. It is becoming increasingly clear that there are many facets of human intelligence. Consider, for instance, a Turing test of visual intelligence—that is, questions about an image, a scene. Questions may range from what is there, to who is there, to what is this person doing, to what is this girl thinking about this boy, and so on. We know by now, from recent advances in cognitive neuroscience, that answering these questions requires different competences and abilities, often rather independent from each other, often corresponding to separate modules in the brain.
For instance, the apparently very similar questions of object and face recognition (what is there vs. who is there) involve rather distinct parts of visual cortex. The word intelligence can be misleading in this context, like the word life was during the first half of the last century, when popular scientific journals routinely wrote about the problem of life as if there were a single substratum of life waiting to be discovered that would completely unveil the mystery.
Of course, speaking today about the problem of life sounds amusing: biology is a science dealing with many different great problems, not just one. Intelligence is one word but many problems: not one Nobel prize but many. This is related to Marvin Minsky's view of thinking, well captured by his slogan "Society of Mind". In the same way, a real Turing test is a broad set of questions probing the main aspects of human thinking. For this reason, my colleagues and I are developing a framework around an open-ended set of Turing+ questions in order to measure scientific progress in the field. The plural in questions is to emphasize that there are many different intelligent abilities that have to be characterized, and possibly replicated in a machine, from the basic visual recognition of objects, to the identification of faces, to the gauging of emotions, to social intelligence, to language, and much more.
The term Turing+ is to emphasize that a quantitative model must match human behavior and human physiology—the mind and the brain. The requirements are thus well beyond the original Turing test. An entire scientific field is required to make progress on understanding them and to develop the related technologies of intelligence.
Should we be afraid of machines that think?
Since intelligence is a whole set of solutions to rather independent problems, there is little reason to fear the sudden appearance of a super-human machine that thinks, though it is always better to err on the side of caution. Of course, each of the many technologies that are emerging, and will emerge over time in order to solve the different problems of intelligence, is likely to be powerful in itself, and therefore potentially dangerous in its use and misuse, as most technologies are.
Thus, as is the case in other parts of science, proper safety measures and ethical guidelines should be in place. In addition, there is probably a need for constant monitoring—perhaps by an independent supranational organization—of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only am I not afraid of machines that think, but I find their birth and evolution one of the most exciting, interesting and positive events in the history of human thought.
http://en.wikipedia.org/wiki/Tomaso_Poggio


Neuroscientist, Stanford University; Author, Monkeyluv

What do I think about machines that think?  Well, of course it depends on who that person is.
 http://en.wikipedia.org/wiki/Robert_Sapolsky


Writer, BrainPickings.org
Thinking is not mere computation—it is also cognition and contemplation, which inevitably lead to imagination. Imagination is how we elevate the real toward the ideal, and this requires a moral framework of what is ideal. Morality is predicated on consciousness and on having a self-conscious inner life rich enough to contemplate the question of what is ideal.
The famous aphorism often attributed to Einstein—"imagination is more important than knowledge"—is thus only interesting because it exposes the real question worth contemplating: not that of artificial intelligence but that of artificial imagination.
Of course, imagination is always "artificial" in the sense of being concerned with the un-real or trans-real—of transcending reality to envision alternatives to it—and this requires a capacity for holding uncertainty. But the algorithms that drive machine computation thrive on goal-oriented executions, in which there is no room for uncertainty—"if this, then that" is the antithesis of the imagination, which lives in the unanswered and often, vitally, unanswerable realm of "what if?" As Hannah Arendt once wrote, to lose our capacity for asking such unanswerable questions would be to "lose not only the ability to produce those thought-things that we call works of art but also the capacity to ask all the answerable questions upon which every civilization is founded."
Whether machines will ever be able to ask and sit with the unanswerable questions that define true thought is essentially a question of whether they'll ever evolve consciousness.
But, historically, our criteria for consciousness have been extremely limited by the solipsism of the human experience. As recently as the 17th century, René Descartes proclaimed "cogito ergo sum," implying that thinking is a uniquely human faculty, as is consciousness. He saw non-human animals as "automata"—moving machines, driven by instinct alone. And yet here we are today, with some of our most prominent scientists signing the Cambridge Declaration of Consciousness, stating without equivocation that non-human animals do indeed possess consciousness and, with it, interior lives of varying degrees of complexity. Here we are, too, conducting experiments that demonstrate rats—rats—can display moral behavior to one another.
So will machines ever be moral, imaginative? It is likely that if and when they reach that point, theirs will be a consciousness that isn't beholden to human standards—their ideals will not be our ideals, but they will be ideals nonetheless. Whether or not we're able to recognize these processes as thinking will be determined by the limitations of human thought in understanding different—perhaps wildly, unimaginably different—modalities of thought itself.
 http://en.wikipedia.org/wiki/Maria_Popova



Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Master, Trinity College; Author, From Here to Infinity
The potential of advanced AI, and concerns about its downsides, are rising on the agenda—and rightly so. Many of us think that the AI field, like synthetic biotech, already needs guidelines that promote "responsible innovation"; others regard the most-discussed scenarios as too futuristic to be worth worrying about.
But the divergence of view is basically about the timescale—assessments differ with regard to the rate of travel, not the direction of travel. Few doubt that machines will surpass more and more of our distinctively human capabilities—or enhance them via cyborg technology. The cautious amongst us envisage timescales of centuries rather than decades for these transformations. Be that as it may, the timescales for technological advance are but an instant compared to the timescales of the Darwinian selection that led to humanity's emergence—and (more relevantly) they are less than a millionth of the vast expanses of time lying ahead. That's why, in a long-term evolutionary perspective, humans and all they've thought will be just a transient and primitive precursor of the deeper cogitations of a machine-dominated culture extending into the far future, and spreading far beyond our Earth.
We're now witnessing the early stages of this transition. It's not hard to envisage a "hyper computer" achieving oracular powers that could offer its controller dominance of international finance and strategy—this seems only a quantitative (not qualitative) step beyond what "quant" hedge funds do today. Sensor technologies still lag behind human capacities. But when robots can observe and interpret their environment as adeptly as we do, they will truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. We'd have no more reason to disparage them as zombies than to regard other people in that way.
Their greater processing speed may give robots an advantage over us. But will they remain docile rather than "going rogue"? And what if a hyper-computer developed a mind of its own? If it could infiltrate the Internet—and the Internet of things—it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes—or even treat humans as an encumbrance. Or (to be more optimistic) humans may transcend biology by merging with computers, maybe subsuming their individuality into a common consciousness. In old-style spiritualist parlance, they would "go over to the other side."
The horizons of technological forecasting rarely extend even a few centuries into the future—and some predict transformational changes within a few decades. But the Earth has billions of years ahead of it, and the cosmos a longer (perhaps infinite) future. So what about the posthuman era—stretching billions of years ahead?
There are chemical and metabolic limits to the size and processing power of "wet" organic brains. Maybe we're close to these already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans.
So, by any definition of "thinking," the amount and intensity that's done by organic human-type brains will be utterly swamped by the cerebrations of AI. Moreover, the Earth's biosphere, in which organic life has symbiotically evolved, is not a constraint for advanced AI. Indeed it is far from optimal—interplanetary and interstellar space will be the preferred arena, where robotic fabricators will have the grandest scope for construction, and where non-biological "brains" may develop insights as far beyond our imaginings as string theory is for a mouse.
Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellects of the inorganic post-human era. Moreover, evolution on other worlds orbiting stars older than the Sun could have had a head start. If so, then aliens are likely to have long ago transitioned beyond the organic stage.
So it won't be the minds of humans, but those of machines, that will most fully understand the world—and it will be the actions of autonomous machines that will most drastically change the world, and perhaps what lies beyond.
 http://en.wikipedia.org/wiki/Martin_Rees,_Baron_Rees_of_Ludlow



Physicist; Cosmologist, ASU; Author, A Universe from Nothing
There has of late been a great deal of ink devoted to concerns about artificial intelligence, and a future world where machines can "think," where the latter term ranges from simple autonomous decision-making to full-fledged self-awareness. I don't share most of these concerns, and I am personally quite excited by the possibility of experiencing thinking machines, both for the opportunities they will provide for potentially improving the human condition, and for the insights they will undoubtedly provide into the nature of consciousness.
First, let's make one thing clear. Even with the exponential growth in computer storage and processing power over the past 40 years, thinking computers will require a digital architecture that bears little resemblance to current computers, nor are they likely to become competitive with consciousness in the near term. A simple physics thought experiment supports this claim:
Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 terawatts of power, within a factor of two of the current power consumption of all of humanity. However, the human brain uses about 10 watts of power. This means a mismatch of a factor of 10^12, or a million million. Over the past decade the doubling time for Megaflops/watt has been about 3 years. Even assuming Moore's Law continues unabated, this means it will take about 40 doubling times, or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it is extremely unlikely that 40 such doublings could be achieved without essentially changing the way computers compute.
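The arithmetic behind these numbers is easy to check; a quick sketch in Python, using only the essay's own figures (a 10^12 efficiency gap and a 3-year doubling time):

    import math

    power_gap = 10e12 / 10            # ~10 terawatts needed vs. ~10 watts for a brain
    doublings = math.log2(power_gap)  # efficiency doublings needed to close the gap
    years = doublings * 3             # one doubling of Megaflops/watt per ~3 years
    print(round(doublings), round(years))  # -> 40 120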
Ignoring for a moment the logistical challenges, I imagine no other impediment in principle to developing a truly self-aware machine. Before this, machine decision-making will take an ever more important role in our lives. Some people see this as a concern, but it has already been happening for decades. Starting perhaps with the rudimentary computers called elevators, which determine how and when we will get to our apartments, we have allowed machines to autonomously guide us. We fly each week on airplanes that are guided by autopilot, our cars make decisions about when they should be serviced or when tires should be filled, and fully self-driving cars are probably around the corner.
For many, if not most, relatively automatic tasks, machines are clearly much better decision-makers than humans, and we should rejoice that they have the potential to make everyday activities safer and more efficient. In doing so we have not lost control, because we create the conditions and initial algorithms that determine the decision-making. I envisage the human-computer interface as like having a helpful partner, and the more intelligent machines become, the more helpful partners they can be.
Any partnership requires some level of trust and loss of control, but if the benefits often outweigh the losses, we preserve the partnership. If they don't, we sever it. I see no difference if the partner is a human or a machine.
One area where we may have to be particularly cautious about partnerships involves the command and control infrastructure in modern warfare. Because we have the capability to destroy much of human life on this planet, it seems worrisome to imagine that intelligent machines might one day control the decision-making apparatus that leads to pushing the big red button, or even launching a less catastrophic attack. I think this is because when it comes to decision-making we often rely on intuition and interpersonal communication as much as rational analysis—the Cuban missile crisis is a good example—and we assume intelligent machines will not have these capabilities.
However, intuition is the product of experience, and communication is, in the modern world, not restricted to telephones or face-to-face conversations. Once again, the intelligent design of systems with numerous redundancies and safeguards built in suggests to me that machine decision-making, even in the case of violent hostilities, is not necessarily worse than decision-making by humans.
So much for possible worries. Let me end with what I think is the most exciting scientific aspect of machine intelligence. Machines currently help us do most of our science, by calculating for us. Beyond simple numeric programming, most graduate students in physics now depend on Mathematica, which does most of the symbolic algebraic manipulation that we used to do ourselves when I was a student. But this just scratches the surface.
I am interested in what machines will focus on when they get to choose the questions as well as the answers. What questions will they choose? What will they find interesting? And will they do physics the same way we do? Surely quantum computers, if they ever become practical, will have a much better "intuitive" understanding of quantum phenomena than we will. Will they be able to make much faster progress unravelling the fundamental laws of nature? When will the first machine win a Nobel Prize? I suspect, as always, that the most interesting questions are the ones we haven't yet thought of.
 http://en.wikipedia.org/wiki/Lawrence_M._Krauss