The Official Hentaiverse Chat, Post your random thoughts or theorycrafts about HV

Apr 18 2017, 09:48
nec1986
Group: Gold Star Club
Posts: 2,569
Joined: 12-October 14

Still a long way to go for computers to make something decent. A bit off-topic, but what do you think about machine self-improvement? At some point it's definitely going to grow from its current few-year-old-kid intelligence to that of clever adult scientists. And because it can work thousands or even millions of times faster than our brains, that could mean 100 years of historical progress in an hour to a month of work time. In the last 100 years people created the first airplanes and spaceships, radio, phones, computers, cars, good cameras, the internet, movies - even electricity wasn't very common 100 years ago, and many people were still using fire for lighting. It's an extremely promising tech which will probably lead to even stronger exponential development.
From that point a few problems split off: 1. How can we control it, given that it's cleverer than humans? 2. Won't it lead to a world war, because every country will want such progress and to take control of the whole world? Well, maybe "country" isn't the correct word - rather, the rich people in them.

Apr 18 2017, 10:08
Scremaz
Group: Gold Star Club
Posts: 24,304
Joined: 18-January 07

i'd like to remind you that pcs are better at *purely mathematical* subjects/matters. plus, they can only work in a rigid sequential fashion, afaik. no chance for them to adjust the algorithm themselves when needed, unless the coder specified a bunch of alternative cases.
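For what it's worth, even a tiny program can already adjust its own behavior from feedback alone, without the coder enumerating alternative cases. A minimal illustrative sketch (my own toy example, nothing from the thread - all names and numbers are made up): an epsilon-greedy learner that discovers the best of three payout options purely by trial and error.

```python
import random

random.seed(0)

true_payout = [0.2, 0.5, 0.8]  # hidden from the learner
estimates = [0.0, 0.0, 0.0]    # the learner's running payout estimates
counts = [0, 0, 0]

for step in range(5000):
    # explore 10% of the time; otherwise exploit the current best estimate
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # incremental mean update: nudge the estimate toward the observed reward
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = estimates.index(max(estimates))
print(best)  # the learner settles on arm 2, the highest-payout option
```

No branch in the code says "prefer arm 2" - that behavior emerges from the feedback loop, which is the same shape of idea, scaled way down, as the self-play training discussed later in the thread.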

Apr 18 2017, 10:56
Superlatanium
Group: Gold Star Club
Posts: 7,595
Joined: 27-November 13

A topic that is very important (IMO), but few know about it.

QUOTE(nec1986 @ Apr 18 2017, 07:48)  A bit offtop, but what you think about machine improvement?
I think the eventual result of AI machine learning will have a large hand in determining the ultimate fate of civilization. It's a very long way off - strong AI probably won't be a thing for many decades, possibly not in our lifetimes - but it will probably be one of the biggest milestones in history.

QUOTE(nec1986 @ Apr 18 2017, 07:48)  Definitely at some point its gonna grow from current few years kid intelligence to clever adults-scientists.
More than that. Strong AI has the potential to solve a great many problems, technological and otherwise, but we first have to make sure its goal is in alignment with human values. "There are strong reasons to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal," and figuring out those values precisely enough to define them is an extraordinarily difficult task.

QUOTE(nec1986 @ Apr 18 2017, 07:48)  1. How we can control it, because its clever than humans.
Mostly, we probably can't. I think there are only two possibilities: (1) We keep it in a "box" to heavily restrict its interaction with the outside world - but eventually, it will likely [ intelligence.org] convince someone to let it out of the box, Pandora's Box is opened, and we pray that we've programmed it correctly. (2) We don't build it in the first place. (Unfortunately, if one entity has the potential to build a strong AI, chances are high that other researchers will figure it out in the near future; unless we wipe ourselves out first, someone will eventually turn on a strong AI, or some other sufficiently powerful optimization process.)

Reading on this topic brings you to interesting ideas such as [ en.wikipedia.org] Newcomb's Problem, [ wiki.lesswrong.com] Roko's Basilisk, and the [ en.wikipedia.org] Doomsday argument. Fascinating stuff to think about, at least to me.

I know these sorts of topics look somewhat suspicious from the outside view, but that's only social bias talking, and it's not a sound reason to conclude that the worldview is wrong.

QUOTE(Scremaz @ Apr 18 2017, 08:08)  i'd like to remind you that pcs are better on *merely mathematical* subjects/matters. plus, they can only work in a rigid sequential fashion, afaik. no chance for them to adjust a bit the algorythm themselves when needed, unless the coder specified a bunch of alternative cases.
For now, yes. For the next few decades (maybe), yes. But eventually, someone will likely figure out how to code an AI capable of [ lesswrong.com] recursive self-improvement, which will be capable of modifying itself to become far more intelligent than any human in a very short span of time. Once that happens, then if the AI has a goal, we must hope that it's been programmed properly (and that we've solved major questions in ethical philosophy), else the consequences [ en.wikipedia.org] could be disastrous.

Apr 18 2017, 11:53
Scremaz
Group: Gold Star Club
Posts: 24,304
Joined: 18-January 07

well, without going as far as imagining a Skynet-like scenario (which is quite the extreme outcome, though i'm pretty sure someone other than me has thought about it), wouldn't something like the [ en.wikipedia.org] laws of robotics (eventually expanded to include the environment, animals and such) be enough to prevent many of these dangers? throw them into a really solid section of the AI (i.e. its fundamentals, its kernel, or whatever cannot be modified by the AI itself) in the form of axioms, and in my naivety i think that would already be a good starting point.

Apr 18 2017, 13:08
Superlatanium
Group: Gold Star Club
Posts: 7,595
Joined: 27-November 13

QUOTE(Scremaz @ Apr 18 2017, 09:53)  well, without going as far as imagining a Skynet-like scenario, wouldn't something like the [ en.wikipedia.org] laws of robotic (eventually expanded to include environment, animals and such) be enough to prevent many of these dangers?
This [ io9.gizmodo.com] specific example was discussed in an interview of a couple of experts. In short, they're ideas that can be played with (and subverted) in interesting ways in fiction, but they have very little to do with actual consistent, systematic ethical systems. More contemporary ethical philosophy is more closely related to variants of consequentialism/utilitarianism. Ethics (in my opinion) needs to be much more reflexively consistent than any hard set of premade rules.

The scope of the issue might be more clear by looking at the zeroth law: "0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm." But what, ultimately, is good for humanity, or should count as harm? (those might be the same question) Sure, we can answer some specific, limited questions on that front, such as "do I rescue the girl, or leave her there to be hit by a car?" - but reality has countless numbers of interlocking situations involving a multitude of values humans care about.

"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."

Human values are [ wiki.lesswrong.com] complex. Coming up with a precise definition of what humanity as a whole should value (and of what a friendly AI should value) is an incredibly difficult problem.

Apr 18 2017, 13:19
Sapo84
Group: Gold Star Club
Posts: 3,332
Joined: 14-June 09

QUOTE(Scremaz @ Apr 18 2017, 11:53)  wouldn't something like the [ en.wikipedia.org] laws of robotic (eventually expanded to include environment, animals and such) be enough to prevent many of these dangers?
They don't really work in Asimov's stories, so I really doubt they would be very useful in reality XD
But, well, if we can create an AI with the ability to kill humanity, then someone a bit crazier than the rest can and will program one to kill his "enemies" (I mean, we are risking a nuclear war right now, and we have seen gas attacks - we know someone is crazy enough to do this shit). Let's be honest, our biggest threats and enemies are still fellow humans for the foreseeable future.

Apr 18 2017, 13:36
yami_zetsu
Group: Gold Star Club
Posts: 2,686
Joined: 25-February 13

QUOTE(Sapo84 @ Apr 18 2017, 06:19)  They don't really work in Asimov's stories, I really doubt that they would be very useful in reality XD
But, well, if we can create an AI with the ability to kill humanity someone a bit crazier than the rest can and will program one to kill his "enemies" (I mean, we are risking a nuclear war now we have seen gas attacks, we know someone is crazy enough to do this shit). Let's be honest, our biggest threat and enemies are still fellow humans for the foreseeable future.
don't forget climate change - can the environment withstand another 3 or 4 centuries?

Apr 18 2017, 19:19
jacquelope
Group: Members
Posts: 10,436
Joined: 28-July 15

Rainbow Smoothies at 100K in WTB? Is that where prices are going to go, or are they going to crash? Act now, or act later?
That alone puts me ahead despite all I spent on Ancient Fruits, thanks to unusual luck with Energy Drinks.

Apr 18 2017, 19:23
Scremaz
Group: Gold Star Club
Posts: 24,304
Joined: 18-January 07

QUOTE(jacquelope @ Apr 18 2017, 19:19)  Rainbow smoothies at 100K in WTB? Is that where prices are going to go or are they going to crash? Act now, act later?
already? uh, that man is against the show...

Apr 18 2017, 19:25
Juggernaut Santa
Group: Gold Star Club
Posts: 11,132
Joined: 26-April 12

It will rise.

Apr 18 2017, 19:34
jacquelope
Group: Members
Posts: 10,436
Joined: 28-July 15

QUOTE(End Of All Hope @ Apr 18 2017, 10:25)  It will raise.
Past 100k? Then I shall wait!

Apr 18 2017, 19:36
Slobber
Group: Gold Star Club
Posts: 7,794
Joined: 4-February 11

u guys sound like those people playing stocks that wait for "the best price" =\

Apr 18 2017, 19:38
Juggernaut Santa
Group: Gold Star Club
Posts: 11,132
Joined: 26-April 12

QUOTE(jacquelope @ Apr 18 2017, 19:34)  Past 100k? Then I shall wait!
Let's see if a competitor exists first. If one does, it will be Stocking Stuffers: The Return.

QUOTE(Slobber @ Apr 18 2017, 19:36)  u guys sound like those people playing stocks that wait for "the best price" =\
Well, opening topics for 55 and 56k while sssss2 buys for double that - it's pretty much the same thing.

Apr 18 2017, 19:44
jacquelope
Group: Members
Posts: 10,436
Joined: 28-July 15

QUOTE(End Of All Hope @ Apr 18 2017, 10:38)  Let's see if a competitor exist first. If it exist, it will be Stocking Stuffers The Return. Well, opening topics for 55 and 56k while sssss2 buys for double that, it's the same as that
I know blackjac00 pretty much BTFO'd competitors with his previous ludicrously high bids for trophies. I'm waiting on that, but his 450k bid for Blenders may remain unbeaten.

Apr 18 2017, 19:47
nec1986
Group: Gold Star Club
Posts: 2,569
Joined: 12-October 14

Indeed, it could decide, for humanity's safety, to put all people in cages under control - that way it's not possible for us to harm each other. Closer to reality: a Skynet-type scenario isn't likely to happen. Possible in theory? Yep, I think so. For example, there is a logical case for using super-intelligence for military goals. An auto-pilot can react much faster and make better judgments, and it's clever at tactics and strategy too. So just use it and it will give much faster and better decisions. But once an AI has direct control over weapons, any mistake in its goals can lead to the opposite result.

I'm not very good at such tech stuff, but lately I've seen AI becoming stronger than humans in games. First it was something simple like tic-tac-toe, then checkers and chess, but that was more or less fine - it's mostly about counting every possible move, and machines are good at that. And because there are games with a much higher number of moves, people thought "it's fine, PCs won't be stronger than humans anytime soon". And then, boom, AI became stronger even without such deep counting. People decided to use neural networks, and not long ago AlphaGo easily won 60-0 against top professionals at go (a 19x19 board with something around 2*10^170 possible positions). How did they do it? They created a basic model and an algorithm for learning; the computer just played against itself and learned how to win more often. So we've already made a small step toward self-learning programs. At the moment it's just a small part - people set the overall strategy and goal for the computer, and it just does "random-like" exploration in the middle until the final result is satisfying. But the next step might be just giving it the desired result and making the computer find the way to get it. That's probably not the near future, because the human brain is quite a complicated thing and computers aren't so powerful (though much faster), but with time one area after another becomes easier/better for AI.

P.S. Thx Superlatanium for interesting links.
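The "counting every possible move" approach mentioned above for games like tic-tac-toe can be made concrete in a few lines. A hedged sketch (my own toy code, nothing to do with AlphaGo or anything from the thread): exhaustive minimax over the full tic-tac-toe game tree, which confirms the classic result that perfect play by both sides ends in a draw.

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
             (0, 4, 8), (2, 4, 6)]             # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation:
    +1 if X can force a win, -1 if O can, 0 if best play is a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '  # undo the move before trying the next one
    return max(scores) if player == 'X' else min(scores)

print(minimax([' '] * 9, 'X'))  # 0: perfect play from both sides is a draw
```

This brute force works because tic-tac-toe's tree is tiny; for chess, and especially for go's ~2*10^170 positions, it's hopeless, which is exactly why the self-play learning approach described above was needed.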

Apr 18 2017, 19:51
jacquelope
Group: Members
Posts: 10,436
Joined: 28-July 15

QUOTE(nec1986 @ Apr 18 2017, 10:47)  For example, there is logical sense to use super-intelligence in military goals. Auto-pilot can react much faster and make better judgments. Once AI has direct control on weapons then any mistake in goals can lead to opposite result.
In battle, aimbots should always win. It shouldn't in any way be a "roger roger" cockup. Robots should always hit inherently slow-moving human targets.

Apr 18 2017, 19:52
Juggernaut Santa
Group: Gold Star Club
Posts: 11,132
Joined: 26-April 12

Talking about games, there is one thing that AI will likely never get, and thus will never reach a human level at: irrationality.
Without even getting into the emotional part, I'll give a plain example. In a game, an AI will NEVER make a deliberately bad move in order to lure the player into making an even worse one, and win that way. And this is only the tip of the iceberg.

Apr 18 2017, 20:07
Scremaz
Group: Gold Star Club
Posts: 24,304
Joined: 18-January 07

QUOTE(End Of All Hope @ Apr 18 2017, 19:38)  Let's see if a competitor exist first. If it exist, it will be Stocking Stuffers The Return.
more likely Pot of Gold: The Return. pretty sure the old Stocking Stuffers bid war won't happen again, not even at christmas - bidders learned the hard way that it was a bit too much.

QUOTE(End Of All Hope @ Apr 18 2017, 19:38)  Well, opening topics for 55 and 56k while sssss2 buys for double that, it's the same as that
well, sssss opened a topic for 50k, then slobber and i opened ours for slightly higher. i don't know about slobber, but i simply wanted to reach a certain threshold and let sssss have all the rest. but given how much he immediately raised, i guess he was already committed to sinking that many credits into it. as for me, i don't really think i'll raise my price more than that. btw, what to shrine for?

Apr 18 2017, 21:03
jacquelope
Group: Members
Posts: 10,436
Joined: 28-July 15
