Book Description
With the rapid advancement of artificial intelligence (AI) and robotics, some people predict that a society in which robots and human beings coexist is approaching. However, I wonder whether we could actually get along with robots in a world where we cannot accept diversity even among human beings themselves. If robots joined this deeply divided world, would they not merely cause even greater chaos?
I recently began researching a morality engine to control the behavior of robots. Simply put, I study how to enable robots to distinguish good from evil by themselves, in preparation for a coming future in which robots and human beings coexist.
The concept of morality for robots is nothing new. Back in the 1940s, for example, the American science fiction writer Isaac Asimov began introducing his famous Three Laws of Robotics in his novels.
The Three Laws are very well known, and some people even treat them as golden rules for robots to observe. To me, however, these Laws hold significant problems that make them unsuitable for practical use. As you read this book, you will come to see the fundamental defect in the Laws.
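To make the structure of the Laws concrete, here is a minimal sketch (my own illustration, not taken from the book) of the Three Laws expressed as priority-ordered checks on a proposed action. The predicates such as "harms_human" are hypothetical placeholders; deciding what actually counts as harm or an order in the real world is left entirely open, which already hints at the kind of practical difficulty the book examines.

```python
def permitted(action: dict) -> bool:
    """Check a proposed action against Asimov's Three Laws in priority order."""
    # First Law (highest priority): a robot may not injure a human being
    # or, through inaction, allow a human being to come to harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: a robot must obey orders given by human beings, except
    # where such orders conflict with the First Law (conflicting orders
    # are already rejected by the check above).
    if action.get("disobeys_order"):
        return False
    # Third Law: a robot must protect its own existence, as long as such
    # protection does not conflict with the First or Second Law.
    if action.get("endangers_self") and not action.get("risk_required_by_higher_law"):
        return False
    return True

# A harmless, obedient, safe action passes all three checks.
print(permitted({}))                      # True
# Any action that harms a human is forbidden outright.
print(permitted({"harms_human": True}))   # False
# Self-risk is allowed when a higher-priority Law demands it.
print(permitted({"endangers_self": True,
                 "risk_required_by_higher_law": True}))  # True
```

Note that every predicate in this sketch is assumed to be decided in advance; the hard part, formalizing those judgments, is precisely what a working morality engine would have to supply.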
In order to build a moral engine with which to regulate robots, we first need to describe the moral framework of human beings. Modeling such an abstract concept becomes possible when we use an engineering way of thinking as a tool. In this book, I would like to think through this framework together with you, in words as simple and plain as possible.
If we can model human morality, we will be able to install it in the brains of robots. And if we can build a moral system that robots and human beings, mutually different existences, can share, it will in turn help us overcome the divisions that arise from differences in standpoint among human beings, and further develop an inclusive and diverse society. Using such a new moral system, I would like to establish alternative principles to Asimov's Three Laws of Robotics and consider the possibility of a society where human beings and robots coexist. Morality and robots may seem to have nothing in common, but by looking at the point where these two areas actually cross, we can glimpse the principles of a future society that we human beings should aim for.
Throughout this intensive seminar, we will develop our arguments freely and widely. A summary and practice exercises at the end of each session will help deepen your understanding. Let us ready ourselves to think outside the box and dig deep into our imagination.
Contents
Introduction
Session 1. Is the “You Shall Not Kill” Rule Universal?
Session 2. Classifying Prior Moral Thoughts
Session 3. You Shall Not Kill… Whom?
Session 4. Modeling the Basic Principle of Morality
Session 5. Classifying Hierarchy of Morality
Session 6. Installing Morality onto Robots
Afterword
Hints for Practice Exercises
References