The Acoustic Guitar Forum

The Acoustic Guitar Forum > Other Discussions > Open Mic

  #61  
Old 05-30-2023, 07:31 PM
tbeltrans is offline
Charter Member
 
Join Date: Mar 2008
Location: Twin Cities
Posts: 8,097

Quote:
Originally Posted by Sadie-f
Chess software that beat the world champion Kasparov used absolutely no AI, so it's not a great point of comparison. That was 1997, and it involved parallel computation and custom parallel hardware for move evaluation. It was a brute-force approach, and that continues to be a good approach for chess; Stockfish works the same way and, running on a cellphone, can beat any human player.

Getting to that point with Go took another 18 years (a huge amount of time in computer science). Winning at Go required both brute-force search and neural-network AI components, and yes, when repurposed to play chess, that system, AlphaZero, can handily beat Stockfish. Of course there are also pure AI chess engines, and Leela, the best of them, is an approximate match for Stockfish.

A more intellectually interesting AI chess engine is Maia, which was designed to be trained to play like humans of a given level. Trained on thousands of games between players of given levels, Maia plays with apparently human-like oversights.

However, all of these things can only imitate. Imitation is good enough for commerce, but writing poetry is still outside their purview.

Back to your OP: my partner is an actual poet (has won awards, published work, etc.). Where I took a whole stanza to determine that ChatGPT is an awful poet, she refused to read past the second line, and had some harsh words for me for sharing it ;-).
I am familiar with the history of chess computing and have followed the development of AlphaZero, specifically with chess, and its open-source (limited) counterpart, lc0.

From what I understand, Go is a much more complicated game, with many more possibilities to manage than chess. Since I don't play Go, all I can do is tell you that this is what I have read about it. To comment further would be putting on airs, as if I knew more than I do. There is a lot more that I don't know than that I do, and I make no bones about that. I often wonder how some people in various forums seem to be experts in so many things.

To me, chess is an ideal candidate for applying AI because it is a finite problem. It is interesting to be able to build a chess NN on my laptop, even if it isn't a world-changing application of AI.
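That laptop-scale experiment can be surprisingly small. Below is a minimal sketch (not any real engine's code) of a toy position-evaluation network, assuming a 768-feature board encoding (12 piece types × 64 squares); the weights are random and untrained, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy evaluation net: 768 inputs (12 piece planes x 64 squares),
# one hidden layer of 64 units, one scalar output score.
# A real engine would train these weights on millions of labeled positions.
W1 = rng.standard_normal((768, 64)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 1)) * 0.01
b2 = np.zeros(1)

def evaluate(board_features):
    """board_features: 0/1 vector of length 768 encoding piece placement."""
    hidden = np.maximum(0.0, board_features @ W1 + b1)  # ReLU hidden layer
    return float((hidden @ W2 + b2)[0])                 # positive = good for White

score = evaluate(np.zeros(768))  # an empty board scores 0 with zero biases
```

Modern engines use far more elaborate feature encodings, but the shape of the computation is the same.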

Maybe the current state of AI technology is such that it can only imitate, but the fear people express (as I understand it, based on what I have read) is that this won't always be true. Folks having to set policy for managing AI will necessarily have to deal with this eventuality if their policies are to remain effective.

I readily admit that I am far from an expert in poetry. I have not read much of it and therefore have not developed any feel for judging it. My intended purpose in posting that "poem" was to get this discussion rolling, and that purpose was achieved.

In my personal experience, it is when one develops some level of expertise in something that one begins to recognize that s/he has quite limited knowledge in many other areas. I recognize that I have no expertise in poetry and therefore did not offer an opinion on the "poem" that ChatGPT generated.

It seems reasonable to me that your partner, being an actual poet, would have strong opinions regarding poetry and the quality of a given piece, just as we here do when it comes to guitars and guitar playing. Sorry your partner had harsh words for you about it though.

Tony
__________________
“The guitar is a wonderful thing which is understood by few.”
— Franz Schubert

"Alexa, where's my stuff?"
- Anxiously waiting...

Last edited by tbeltrans; 05-30-2023 at 07:37 PM.
  #62  
Old 05-30-2023, 09:05 PM
Sadie-f is offline
Charter Member
 
Join Date: Apr 2022
Location: New England
Posts: 1,048

Quote:
Originally Posted by tbeltrans
I am familiar with the history of chess computing and have followed the development of AlphaZero, specifically with chess, and its open-source (limited) counterpart, lc0.

From what I understand, Go is a much more complicated game, with many more possibilities to manage than chess. Since I don't play Go, all I can do is tell you that this is what I have read about it. To comment further would be putting on airs, as if I knew more than I do. There is a lot more that I don't know than that I do, and I make no bones about that. I often wonder how some people in various forums seem to be experts in so many things.

To me, chess is an ideal candidate for applying AI because it is a finite problem. It is interesting to be able to build a chess NN on my laptop, even if it isn't a world-changing application of AI.

Maybe the current state of AI technology is such that it can only imitate, but the fear people express (as I understand it, based on what I have read) is that this won't always be true. Folks having to set policy for managing AI will necessarily have to deal with this eventuality if their policies are to remain effective.

I readily admit that I am far from an expert in poetry. I have not read much of it and therefore have not developed any feel for judging it. My intended purpose in posting that "poem" was to get this discussion rolling, and that purpose was achieved.

In my personal experience, it is when one develops some level of expertise in something that one begins to recognize that s/he has quite limited knowledge in many other areas. I recognize that I have no expertise in poetry and therefore did not offer an opinion on the "poem" that ChatGPT generated.

It seems reasonable to me that your partner, being an actual poet, would have strong opinions regarding poetry and the quality of a given piece, just as we here do when it comes to guitars and guitar playing. Sorry your partner had harsh words for you about it though.

Tony
I'd rather say that AI can obviously be applied to chess, but the problem of winning at chess was solved 26 years ago, and straight-up NNs do not so far play stronger than the strongest brute force (exhaustive search). People working on better chess engines tried again and again to apply AI methods (that is, to make engines that could play using the positional methods strong human players use). Engines that tried this always lost to exhaustive search.
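The exhaustive-search core those engines share is minimax with alpha-beta pruning. A minimal sketch, using nested lists as a stand-in for real move generation and leaf numbers as a stand-in for static evaluation:

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Leaves are static evaluation scores; interior nodes are move lists.
    if not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:  # prune: the opponent will avoid this line
                break
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
    return best

# Tiny two-ply tree: our move, then the opponent's best reply.
tree = [[3, 5], [6, 9], [1, 2]]
best = minimax(tree, maximizing=True)  # picks the branch whose worst case is best
```

Real engines add a depth limit and a heuristic evaluation at the search frontier, but the pruning logic is the same.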

Go is a different animal in that the size of its exhaustive search tree grows far faster than chess's. As I said, AlphaZero used multiple approaches. When Deep Blue beat Kasparov, Go engines were still losing to novice human players.
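The gap is easy to quantify using commonly cited approximate average branching factors, about 35 legal moves per position in chess versus about 250 in Go:

```python
# Approximate average branching factors (commonly cited figures).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions(branching, plies):
    """Positions a naive full-width search visits to the given depth."""
    return branching ** plies

chess_tree = positions(CHESS_BRANCHING, 8)  # about 2.3e12 positions
go_tree = positions(GO_BRANCHING, 8)        # about 1.5e19 positions
ratio = go_tree // chess_tree               # Go's tree is millions of times larger
```

Even at a modest 8-ply depth the Go tree is nearly seven million times larger, which is why pure exhaustive search never worked for Go.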

I don't fear, or even care, whether "AI" eventually achieves general intelligence, and I do not think NNs will be what gets there. I spend more time thinking about how NNs are used and how "AIs" are treated by humans (suggested watch: Ex Machina). I also care that systems which so far still fundamentally rely on brute force get called "artificially intelligent".

And now we're delving into ontology and the fundamental questions of design. As I said in an earlier post, so-called "neural networks" work because there's a way to build layers of "neurons" that can be run efficiently on engines built for vector / linear algebra operations.

How a brain works is that neurons have connections to a large number of other neurons, with no particular ordering of who can talk to whom. In computational NNs, each layer communicates only with the next, connections run only to nearest neighbors, and they are generally one-way. That maps well onto general-purpose computing hardware, and especially onto vector processors such as GPUs.
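That strictly layered, one-way structure is exactly why a forward pass reduces to a chain of matrix products, the operation vector hardware is built for. A minimal sketch with arbitrary layer sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3-layer feedforward stack (8 -> 16 -> 16 -> 4). Each layer's output
# feeds only the next layer, in one direction, with no cycles.
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((16, 4))]

def forward(x):
    # The whole pass is just repeated matrix products plus a nonlinearity,
    # which GPUs and other vector processors execute very efficiently.
    for W in weights:
        x = np.maximum(0.0, x @ W)  # dense layer + ReLU
    return x

out = forward(np.ones(8))
```

A brain-like wiring diagram would replace each dense matrix with an enormous, irregular sparse one, which is precisely what current hardware handles poorly.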

Programming something that looked more like animal (or human) brains isn't tractable on any existing hardware (the huge sparse matrices that would result don't fit in any hardware we'll have anytime soon). Additionally, NNs are usually trained on very large data sets: show them (many) thousands of images of handwritten characters and they can "read" handwriting. This bears absolutely no resemblance to how humans read.
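The "show it thousands of labeled examples" idea can be sketched at toy scale. Below, a single logistic "neuron" (a hypothetical stand-in for a real character-recognition net) learns a rule purely from labeled data via gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "data set": 1,000 labeled 2-D points. The label is which side
# of the line x1 + x2 = 0 a point falls on; the model never sees that
# rule, only the labeled examples.
X = rng.standard_normal((1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):                            # gradient-descent steps
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)          # gradient of the log loss
    b -= lr * float(np.mean(p - y))

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(np.mean((p > 0.5) == (y == 1.0)))  # the rule was learned from examples
```

As the post says, this is pattern fitting from examples, not anything like how humans learn to read.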

Arguably, "self-driving" automobiles are safer today than human drivers. However, here's a recent bug in Tesla FSD: a semi trailer can look enough like an overpass bridge that some Teslas have tried to drive underneath semis, resulting in fatalities.

The quest for self driving was driven by a very small group of tech billionaires who invested a lot of money chasing the idea. A decade later there's little to show toward those lofty goals, and (most of) the people who championed the idea have moved on to newer, shinier things.

NNs are providing insight into a lot of difficult problems; they are being used to help understand some physics problems where there's a huge amount of data that humans have trouble comprehending. I'm a fan of having a language engine in my phone that works without a data connection.

On the other hand, the amount of power used by these systems is huge. Again, the edge device in my phone doesn't use much more power than my previous phone, because it's built on special-purpose silicon. However, there's no denying that smartphones use a lot of kWh, as do the huge server facilities that power cloud-based NNs like ChatGPT. The same goes for FSD: a Tesla has a couple of very large application-specific processors enabling that driving model.
  #63  
Old 05-31-2023, 12:48 AM
steelvibe is offline
Registered User
 
Join Date: Jan 2010
Location: my father's attic
Posts: 5,792

Fear and doom around every corner!! Meh.

An analogy involving animals and people shows how such reasoning falls short. When someone says, “Machines and AI will be better or smarter than human beings,” it’s like saying, “Animals are better than humans. Cheetahs are faster. Elephants are bigger. Birds are more agile.” The problem, of course, is that those are all separate animals, and each is only “better” in a separate category. A single AI program might be “better” at chess or cooking or even making music. But for AI to be legitimately as smart as or smarter than people, a single program would need to excel at all of those things at once.

Key to understanding the idea of artificial intelligence is carefully defining terms such as intelligence; in popular depictions of AI, more common terms are variations of smart or smarter. Computers often appear to be intelligent, when in fact they are performing extremely low-level thinking extremely quickly. They aren’t actually smart; they are just capable of doing certain tasks in less time than people can. There are some tasks they cannot do at all. If a person defines intelligence in a way that eliminates concepts such as morality, emotion, empathy, humor, relationship, and so forth, then the phrase artificial intelligence is not so meaningful.

Further, even the most advanced computer still pits human intelligence against human intelligence. Take, for example, the defeat of the greatest chess players on earth by a machine: on one side is a single person, and on the other is a machine mechanically drawing on the collective intelligence of many people. A computer that beats people at chess or checkers or Jeopardy is not “smarter” than the people it beats. It’s just better at getting certain results according to the rules of that particular game.

Now, that last bit is what is concerning and even scary to me, and many have already touched on it in this thread. This is where money can all but buy the "result" of anything. Just create the algorithms that weed out the less desirable, and you have a winning recipe for a social and thought experiment on humans. In my view, we have already seen that for decades without advanced technology. It's a pool of gasoline called propaganda. How to make sure the wrong people don't hold the matches is paramount....

or maybe just shut off the power.
__________________
Don't chase tone. Make tone.
  #64  
Old 05-31-2023, 04:53 AM
tbeltrans is offline
Charter Member
 
Join Date: Mar 2008
Location: Twin Cities
Posts: 8,097

Quote:
Originally Posted by Sadie-f
I'd rather say that AI can obviously be applied to chess, but the problem of winning at chess was solved 26 years ago, and straight-up NNs do not so far play stronger than the strongest brute force (exhaustive search). People working on better chess engines tried again and again to apply AI methods (that is, to make engines that could play using the positional methods strong human players use). Engines that tried this always lost to exhaustive search.

Go is a different animal in that the size of its exhaustive search tree grows far faster than chess's. As I said, AlphaZero used multiple approaches. When Deep Blue beat Kasparov, Go engines were still losing to novice human players.

I don't fear, or even care, whether "AI" eventually achieves general intelligence, and I do not think NNs will be what gets there. I spend more time thinking about how NNs are used and how "AIs" are treated by humans (suggested watch: Ex Machina). I also care that systems which so far still fundamentally rely on brute force get called "artificially intelligent".

And now we're delving into ontology and the fundamental questions of design. As I said in an earlier post, so-called "neural networks" work because there's a way to build layers of "neurons" that can be run efficiently on engines built for vector / linear algebra operations.

How a brain works is that neurons have connections to a large number of other neurons, with no particular ordering of who can talk to whom. In computational NNs, each layer communicates only with the next, connections run only to nearest neighbors, and they are generally one-way. That maps well onto general-purpose computing hardware, and especially onto vector processors such as GPUs.

Programming something that looked more like animal (or human) brains isn't tractable on any existing hardware (the huge sparse matrices that would result don't fit in any hardware we'll have anytime soon). Additionally, NNs are usually trained on very large data sets: show them (many) thousands of images of handwritten characters and they can "read" handwriting. This bears absolutely no resemblance to how humans read.

Arguably, "self-driving" automobiles are safer today than human drivers. However, here's a recent bug in Tesla FSD: a semi trailer can look enough like an overpass bridge that some Teslas have tried to drive underneath semis, resulting in fatalities.

The quest for self driving was driven by a very small group of tech billionaires who invested a lot of money chasing the idea. A decade later there's little to show toward those lofty goals, and (most of) the people who championed the idea have moved on to newer, shinier things.

NNs are providing insight into a lot of difficult problems; they are being used to help understand some physics problems where there's a huge amount of data that humans have trouble comprehending. I'm a fan of having a language engine in my phone that works without a data connection.

On the other hand, the amount of power used by these systems is huge. Again, the edge device in my phone doesn't use much more power than my previous phone, because it's built on special-purpose silicon. However, there's no denying that smartphones use a lot of kWh, as do the huge server facilities that power cloud-based NNs like ChatGPT. The same goes for FSD: a Tesla has a couple of very large application-specific processors enabling that driving model.
Many interesting ideas here, and thanks for taking the time to post them. I won't respond to many of them, because we have gone around on this and, in some cases, are saying the same thing from different perspectives.

I do want to address the comments about the quest for self-driving. I have worked for a number of startup companies during my career, as well as some quite large companies. The startups have often been the most interesting, because they tend to tackle projects that a larger company won't touch unless the idea is either already proven profitable or the company has enough R & D money to just play with ideas.

I have worked on a number of projects that never saw the light of day because it turned out there just wasn't a market for them or it wasn't feasible on a large enough scale to support the company. In some cases, the market wasn't ready yet and some years later, somebody else produced a similar product and the timing was right so it went forward.

Whether self-driving cars become commonplace remains to be seen, and at this point it is difficult to know whether that will eventually happen. But it is certainly not unusual for people to have pursued it; many ideas get pursued and don't come to fruition. Angels and VCs will throw money at a number of startups with such ideas, hoping that one will come to fruition, and sometimes that one is enough to offset those that didn't. With self-driving cars, as I recall, it was larger companies with R & D dollars, but I wouldn't be surprised if the seed of that research came from a smaller company that was acquired for its technology.

The field of technology is very fast-moving, and we can't know all the ideas that are being developed, nor which will come to fruition. A company can be doing reasonably well with a product, and then a disruptor appears, the market shifts away from that product, and the company that made it either shifts with it or fades away. An example of that was a startup I worked with that was developing ATM (Asynchronous Transfer Mode, the cell-based backbone protocol) to the desktop for the medical-imaging and animation industries when Gigabit Ethernet came along. These things happen all the time.

So if some development that got a lot of press doesn't come to fruition, realize that this isn't unusual and that its time may come around again in the future under different circumstances, or maybe it just doesn't happen at all.

With regard to NNs, they have been around and commonly used for quite some time. They are nothing new, but for the general public they seem to be. As I mentioned in an earlier post in this thread, I was working with them back in the mid 1990s. As with any technology, NNs work well for certain applications and not for others. A part of the development is discovering where they are most applicable. Often a technology comes along and is a solution in search of a problem, but NNs are not one of those. They were initially developed with a problem in mind and still people are looking to where else they can be applied. That process will likely continue for quite some time yet since NNs have shown merit in some areas.

Edit: Regarding self-driving cars, the idea was considered back in the 1960s. A government-supported think tank ran a poll among adult drivers asking their views on self-driving cars. The take-away was that, if there were a malfunction, people would rather die in a traffic accident by their own hand than as a result of a computer malfunction. This poll was done before any R & D commenced, so the project was not pursued until recently, and I doubt the results of that poll are even remembered. But then, if such a poll were run today, the results might be very different anyway, as technology has become a much more common aspect of daily life.

Tony
__________________
“The guitar is a wonderful thing which is understood by few.”
— Franz Schubert

"Alexa, where's my stuff?"
- Anxiously waiting...

Last edited by tbeltrans; 05-31-2023 at 05:36 AM.
  #65  
Old 05-31-2023, 02:09 PM
Inyo is offline
Registered User
 
Join Date: Sep 2014
Posts: 2,049

I was under the impression that ChatGPT was a keyboard player that "gig"abytes, tours, with a super computer group called CPU on the motherboard circuit. Their style has a hard drive, of course. No truth to the rumor that Ram is their favorite album, though.

Last edited by Inyo; 05-31-2023 at 03:08 PM.
  #66  
Old 05-31-2023, 03:31 PM
tbeltrans is offline
Charter Member
 
Join Date: Mar 2008
Location: Twin Cities
Posts: 8,097

Quote:
Originally Posted by Inyo
I was under the impression that ChatGPT was a keyboard player that "gig"abytes, tours, with a super computer group called CPU on the motherboard circuit. Their style has a hard drive, of course. No truth to the rumor that Ram is their favorite album, though.
You nailed it perfectly.

I knew there had to be somebody who saw through all the tech talk to the real truth hiding behind the curtain.

Tony
__________________
“The guitar is a wonderful thing which is understood by few.”
— Franz Schubert

"Alexa, where's my stuff?"
- Anxiously waiting...