Anonymous A (OP) double-posted this 8 years ago, 4 minutes later, 13 minutes after the original post[^][v]#899,169
@899,167 (C)
Your interest in a topic depends on other people posting about it on chans?
Anonymous C replied with this 8 years ago, 3 minutes later, 16 minutes after the original post[^][v]#899,171
@previous (A)
Why not make an interesting topic and see if anyone is interested? The Board loves quality posts.
Anonymous A (OP) replied with this 8 years ago, 6 minutes later, 23 minutes after the original post[^][v]#899,173
@previous (C)
This is a topic about machine learning.
What is your general opinion about the current state of it? Have you ever considered testing out any of the ideas for yourself or have any applications in mind for your own uses/interest?
Anonymous B replied with this 8 years ago, 9 minutes later, 32 minutes after the original post[^][v]#899,175
> What is your general opinion about the current state of it?
i don't have a general opinion on it mate
Anonymous C replied with this 8 years ago, 18 seconds later, 32 minutes after the original post[^][v]#899,176
@899,173 (A)
Most of my limited dabbling into neural networks has been in pretty simple vision applications for research. People are applying machine learning to all kinds of problems now. I have no idea what the current state of the field is like. Anything particular you find interesting?
Anonymous A (OP) replied with this 8 years ago, 14 minutes later, 47 minutes after the original post[^][v]#899,181
@previous (C)
I'm interested in reinforcement learning because I think the main limitation in making these ideas more useful is training them in a reasonable way (not giving them millions of hand-labeled images and then training on that)
Maybe you've heard of some of the stuff DeepMind is doing?
Like training networks to play Atari games or do really well at Go (without any human supervision--just learning from playing against itself)
Another cool idea, I think, is the actor-critic approach. The critic tries to estimate how well the actor will do in terms of reward, and the actor is trained to emphasize decisions that give higher reward than the value network estimated.
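To make that concrete, here's a minimal sketch of the actor-critic update on a toy one-step problem with 3 actions. Everything here (the payoffs, learning rates, step count) is illustrative, not from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
logits = np.zeros(n_actions)               # actor: preferences over actions
value = 0.0                                # critic: scalar estimate of expected reward
true_rewards = np.array([0.1, 0.5, 0.9])   # hidden payoffs (made up for the demo)

alpha_actor, alpha_critic = 0.1, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(n_actions, p=probs)
    r = true_rewards[a]

    # Advantage: how much better the outcome was than the critic expected.
    advantage = r - value

    # Critic moves its estimate toward the observed reward.
    value += alpha_critic * advantage

    # Actor: policy-gradient step; actions with positive advantage get reinforced.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    logits += alpha_actor * advantage * grad_log_pi

print(np.argmax(logits))  # index of the action the learned policy prefers
```

The key bit is that the actor isn't rewarded for raw reward but for reward *above the critic's estimate*, which reduces variance compared to plain policy gradients.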
Practically, I'd like to eventually make a strategy game like Civilization and play against strong AI opponents.
But, yeah, overall I know there's a lot of hype about all this stuff. I think a lot of it is justified, though--so many things can be done now with machine learning that weren't possible just 5-6 years ago (although much of that is simply because hardware used to be the limitation)
Anonymous C replied with this 8 years ago, 39 minutes later, 1 hour after the original post[^][v]#899,197
@previous (A)
> I'm interested in reinforcement learning because I think the main limitations in making these ideas more useful is in training them in a reasonable way (not giving them millions of hand-labeled images and then training on that)
Yeah, training on a set can produce great responses for a limited application, but the results don't generalize well. I haven't read a lot about the process of reinforcement learning. It's obviously doing some incredible stuff. Deepmind is tackling games like chess and go in a new way that might give us different ways of looking at game strategy.
Anonymous A (OP) replied with this 8 years ago, 18 hours later, 20 hours after the original post[^][v]#899,291
@previous (C)
> Deepmind is tackling games like chess and go in a new way that might give us different ways of looking at game strategy.
I'm optimistic that it's going to be a lot more than that, though. Beating the world Go champion was remarkable because, unlike chess, you can't just brute force your way to good performance with tree searches alone--the state space is so massive that you won't cover even a tiny fraction of it (far more states than atoms in the universe). The neural networks were used to provide some 'intuition' to guide the search towards good moves.
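That "intuition guiding the search" idea can be sketched with a PUCT-style selection rule, roughly the kind AlphaGo-like systems use: the policy network's prior over moves scales the exploration bonus, so the search expands network-favored moves first. The constant and the numbers below are illustrative:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """Mean value so far plus an exploration bonus scaled by the network prior."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Two unvisited candidate moves with equal observed value, but the policy
# network strongly prefers move A, so the search tries A first.
parent_visits = 10
score_a = puct_score(q=0.0, prior=0.8, parent_visits=parent_visits, child_visits=0)
score_b = puct_score(q=0.0, prior=0.1, parent_visits=parent_visits, child_visits=0)
print(score_a > score_b)  # True
```

As a move accumulates visits, the bonus shrinks and its empirical value q takes over, so the prior only steers where the search looks, it doesn't decide the outcome.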
I'm actually not aware of any (specific) aspect of human mental function that these networks have not already made decent progress on--for example, I saw a paper recently about training them to generate wikipedia intros from the references https://arxiv.org/abs/1801.10198
Maybe what also fascinates me is that the research is all pretty accessible. With even a single commodity GPU (~$600-$1500) and some Python and CUDA C knowledge, it's not that difficult to replicate a lot of what's out there. Even the methods used in the DeepMind Go papers (although at a much smaller scale--but in my experience, even at a small scale you can get networks to learn some non-trivial strategies in games like reversi)...
I feel like society hasn't fully grasped the implications of all this. At some point we're going to be faced with a lot of existential questions about the role of humanity, when the vast majority of the population will be able to produce literally nothing (intellectually, creatively, or otherwise) that machines can't do better. Although, maybe we won't face these questions. I don't think dogs, for example, are much bothered by their roles. Maybe we won't be either.
Anonymous D joined in and replied with this 8 years ago, 12 minutes later, 20 hours after the original post[^][v]#899,293
@previous (A)
> existential questions about the role of humanity
I have only enough time to eat, drink and be merry. A few males here want to eat, drink and be Mary.
Anonymous A (OP) replied with this 8 years ago, 1 hour later, 21 hours after the original post[^][v]#899,314