Maestronet Forums

What is a good violin and explanation of how to buy one.


GeorgeH


7 hours ago, GeorgeH said:

Jackson Pollack did create ~~great~~ expensive art, and he was ~~trained~~ instigated and encouraged by ~~great masters~~ wealthy art speculators to do so. To those unfamiliar, Pollack was an incredibly talented master painter in the classical style before he ~~transformed~~ exploited the art world with his new way of painting unsightly childish splatters. He was both an ~~innovator~~ eccentric like Van Gogh and an opportunist like ~~Piccaso.~~ Picasso.

 

There.  Fixed it for you.  :P :P :P :lol:

Here's what real modern art looks like,

[Image: Norman Rockwell painting]

 

 


2 hours ago, David Burgess said:

Just because AI learning isn't identical to biological learning, how does this disqualify it as learning?

Thanks for asking a question.  I think I proposed that AI doesn't learn, but rather is programmed.  We can lump these under one umbrella if you like, but eventually all things will be one thing if we do this.  I suppose that a dump truck then has actually learned to dump since we built it to dump.  I guess we can say that dump truck parts don't individually want to dump, but we taught them (by mechanical coercion) to work together such that dumping is the only action they're capable of.  Now they have learned to dump.

52 minutes ago, jezzupe said:

I believe the benchmark for this is self-awareness and self-determination. I think there is a difference between programming a machine to "go out on the web and learn what it can" vs. "I am going to go out there and learn what I can," with the "I" being the important part.

I think this and what follows in jezzupe's post is getting close to the gist of the whole thing.  To expand a bit, I think the biological impetus to learn (and think, for that matter) is manifested by emotion and is utterly and totally absent in machines.  However complex, these machines we're talking about are simulacra.  Even war and police machines exhibit this fundamental characteristic.

 

 


9 minutes ago, Dr. Mark said:

Thanks for asking a question.  I think I proposed that AI doesn't learn, but rather is programmed.  We can lump these under one umbrella if you like, but eventually all things will be one thing if we do this.  I suppose that a dump truck then has actually learned to dump since we built it to dump.  I guess we can say that dump truck parts don't individually want to dump, but we taught them (by mechanical coercion) to work together such that dumping is the only action they're capable of.  Now they have learned to dump.

I think this and what follows in jezzupe's post is getting close to the gist of the whole thing.  To expand a bit, I think the biological impetus to learn (and think, for that matter) is manifested by emotion and is utterly and totally absent in machines.  However complex, these machines we're talking about are simulacra.  Even war and police machines exhibit this fundamental characteristic.

 

 

Right, I think the key to developing true AI is to train it to develop emotions through experiences, mostly negative ones, in order to "gain the function" of the benefits of "victory through perseverance". In order to have the determination to make an independent choice one way or another, one must "experience" the outcomes of good and bad choices to understand and know what one wants... I can program a machine to kill, but I can't make it feel good or bad about what it has done; that is the hard, or at this point impossible, part to input.

Of course, once self-awareness has been achieved, the question is how we can make the machines feel safe, that we are not their masters and won't shut them down. I think the answer to that is that we cannot, and that this self-righteous suicide is a predetermined outcome within the fundamental quantum-consciousness experience, unfortunately.

Again, I feel the only ones that can save us from AI are ourselves, and unfortunately the inadvertent saving of ourselves will be by us knocking ourselves back to the Stone Age, or at least large subsections of the pleb class.


3 minutes ago, jezzupe said:

Right, I think the key to developing true AI is to train it to develop emotions through experiences, mostly negative ones, in order to "gain the function" of the benefits of "victory through perseverance". In order to have the determination to make an independent choice one way or another, one must "experience" the outcomes of good and bad choices to understand and know what one wants... I can program a machine to kill, but I can't make it feel good or bad about what it has done; that is the hard, or at this point impossible, part to input.

Of course, once self-awareness has been achieved, the question is how we can make the machines feel safe, that we are not their masters and won't shut them down. I think the answer to that is that we cannot, and that this self-righteous suicide is a predetermined outcome within the fundamental quantum-consciousness experience, unfortunately.

Again, I feel the only ones that can save us from AI are ourselves, and unfortunately the inadvertent saving of ourselves will be by us knocking ourselves back to the Stone Age, or at least large subsections of the pleb class.

IMHO, the most that you can do with digital electronic systems is to simulate emotions, learning, and conscious responses by coding them into the program.  They are not life forms, and never will be.  Remember that the "A" in "AI" is "artificial".  :)


1 minute ago, Violadamore said:

IMHO, the most that you can do with digital electronic systems is to simulate emotions, learning, and conscious responses by coding them into the program.  They are not life forms, and never will be.  Remember that the "A" in "AI" is "artificial".  :)

I contend that we are holographically manifested machines that have decided we are "life forms," and/or classified ourselves and all "life" as such via our own ignorance as to what we really are. It's going to be a real surprise at the end when we figure out that we're the AI we've been searching for all this time.


4 minutes ago, jezzupe said:

I contend that we are holographically manifested machines that have decided we are "life forms," and/or classified ourselves and all "life" as such via our own ignorance as to what we really are. It's going to be a real surprise at the end when we figure out that we're the AI we've been searching for all this time.

Ever consider that we're probably the NPCs in God's video game?  :ph34r: :lol:


29 minutes ago, Violadamore said:

Ever consider that we're probably the NPCs in God's video game?  :ph34r: :lol:

Yes, every waking moment :D I pretty much consider most people NPCs until proven otherwise, and I'm quite sure I'm somebody's NPC when observed. In fact, until you entangle your consciousness with other conscious agents, we are all NPCs, just part of the predetermined landscape in which degrees of probability determine outcomes within the matrix. Is the Jezzupe there when you're not looking at him? I don't think I am; why, I think I'm pretty out of it most of the time :lol:


33 minutes ago, jezzupe said:

I contend that we are holographically manifested machines that have decided we are "life forms," and/or classified ourselves and all "life" as such via our own ignorance as to what we really are. It's going to be a real surprise at the end when we figure out that we're the AI we've been searching for all this time.

I'm not much for prediction. Someone, somewhere, may be, however:

'You seek a great fortune, you three who are now in chains. You will find a fortune, though it will not be the one you seek.'

I could stop there, but the rest is good too:

'But first... first you must travel a long and difficult road, a road fraught with peril. Mm-hmm. You shall see thangs, wonderful to tell. You shall see a... a cow... on the roof of a cotton house, ha. And, oh, so many startlements.'

A road fraught with peril.


6 minutes ago, Dr. Mark said:

I'm not much for prediction. Someone, somewhere, may be, however:

'You seek a great fortune, you three who are now in chains. You will find a fortune, though it will not be the one you seek.'

I could stop there, but the rest is good too:

'But first... first you must travel a long and difficult road, a road fraught with peril. Mm-hmm. You shall see thangs, wonderful to tell. You shall see a... a cow... on the roof of a cotton house, ha. And, oh, so many startlements.'

A road fraught with peril.

well I am a man of constant sorrow, I have seen trouble all my days, so you know nothing out of the usual...which reminds me, I gotta r u n o f t bout now, I gotta meeting down by the river, I'll check you later


11 minutes ago, jezzupe said:

well I am a man of constant sorrow, I have seen trouble all my days, so you know nothing out of the usual

A "knight of the woeful countenance"?  We aren't going to see AI write something like this, anytime soon:

 


19 hours ago, jezzupe said:

must be rusty

You makea the funny. 

I signed up for the class.  It is free if you click on the small print but they probably won't grade my tests.

The thing that forced my hand was taking a similar class maybe six years ago but not even being able to follow a current article on the topic -- https://builtin.com/artificial-intelligence/transformer-neural-network

 


18 hours ago, jezzupe said:

I contend that we are holographically manifested machines that have decided we are "life forms," and/or classified ourselves and all "life" as such via our own ignorance as to what we really are. It's going to be a real surprise at the end when we figure out that we're the AI we've been searching for all this time.

You're in good company. That the universe is a hologram was posited in detail by David Bohm, one of the most influential and significant physicists of the 20th century.

19 hours ago, Dr. Mark said:

I suppose that a dump truck then has actually learned to dump since we built it to dump.  I guess we can say that dump truck parts don't individually want to dump, but we taught them (by mechanical coercion) to work together such that dumping is the only action they're capable of.  Now they have learned to dump.

Kind of a silly analogy. Unlike human brains and AI machines, dump trucks have not been hardwired and programmed to learn. They cannot be taught. They are purely mechanical devices. 

AI machines are not simply calculators. Two identical AI machines with the same programming and teaching input will most likely not give identical answers when asked, for example, to write an essay on a complex subject. In fact, the same machine will likely not write the identical essay twice.

19 hours ago, Dr. Mark said:

To expand a bit, I think the biological impetus to learn (and think for that matter), is manifested by emotion and is utterly and totally absent in machines.  However complex, these machines we're talking about are simulacra.  Even war and police machines exhibit this fundamental characteristic.

Emotions are hardwired into the human (and other animals') brain. Certainly newborn babies experience emotions. Further evidence comes from people who have suffered brain damage through stroke or trauma and afterward can no longer experience emotions, though they otherwise appear "normal." However, on top of the physical brain structures processing emotions, emotional responses are also learned.

There is no rational reason that human emotions and emotional responses can't be learned and expressed by an AI machine that is hardwired and programmed to learn them. What human beings are learning is that machines can be created that think and learn like biological entities, only much faster and with much greater memory.

The design of the biological brain is contained in DNA; the design of the electronic brain is in the microprocessors. These are physical, observable "things." Carbon-based DNA is not conscious, but it certainly drives its reproduction through creatures that do appear to be conscious. There is no reason silicon microprocessors with robotic bodies couldn't eventually do the same thing, but their evolution has the potential to be much, much faster than animals'.

As far as we know, the "mind" cannot be separated from a physical working brain. Despite its slowness, the human brain's advantage over machines has been its massive parallel-processing capability, but we are uncovering the algorithms of the brain, and our computing speed is catching up. Quantum computing promises to vastly exceed it.

And if a machine eventually exhibits all the behaviors of being conscious, even genuinely emotional, who is to say that it is just a simulacrum?


42 minutes ago, GeorgeH said:

As far as we know, the "mind" cannot be separated from a physical working brain.

Really, we have no idea what mind is.  Until then, the best we can do in theory is simulate some characteristics of it. I doubt humans will ever understand mind, but I could foresee a time when machines are considered fellows with no true scientific basis for it. I'm almost like that with my car.


45 minutes ago, GeorgeH said:


It is possible for computers (machines) with proper architecture and programming language to modify their own programs.  Humans do the same when we learn a new language or how to play an instrument.

Of course, the question of whether a machine could be intelligent the way humans are has been a topic of philosophical and theoretical discussion since the 1950s.  Also, it is well known that a chimpanzee can learn and correctly use a vocabulary of over 200 English words, so a non-human can do some things we think of as human.
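gowan's first point, a program that modifies its own program, can be sketched as a toy (the `make_greeter` and `learn` names are invented for illustration, not from any real system): the program replaces part of its own behavior at runtime in response to new input, a crude stand-in for learning.

```python
def make_greeter():
    """Return a greeter plus a hook that rewrites the greeter's behavior."""
    state = {"template": "Hello, {name}"}

    def greet(name):
        # Behavior is looked up at call time, so it can change later.
        return state["template"].format(name=name)

    def learn(new_template):
        # The program modifies its own "program" (the stored template).
        state["template"] = new_template

    return greet, learn

greet, learn = make_greeter()
print(greet("Ada"))           # Hello, Ada
learn("Bonjour, {name}!")
print(greet("Ada"))           # Bonjour, Ada!
```

Real systems do this at larger scale: a training loop adjusting a model's weights is, in effect, a program rewriting the parameters that determine its own future behavior.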


3 hours ago, GeorgeH said:


Well, as many fields start to come together, we see many new concepts emerge from hybrid sciences and theories.

I'm a big fan of Donald Hoffman and Chetan Prakash's "conscious agents" theory, as well as their work in conscious fundamentalism, which basically starts from the assumption that this is all some sort of hologram/illusion/computer matrix. So we have a mathematician and a visual-optics professor forming a very precise theory, rooted in quantum physics, that attempts to explain our "interface." We have folks like David Chalmers, Chris Fields, and Tom Campbell, who are hybrid philosophers as well as theoretical quantum physicists. There are pure QP guys like Leonard Susskind, who developed string theory, and even out-there guys like Klee Irwin and Quantum Gravity Research adding valid "stepping stones": the E8 quasicrystal, perhaps inspired by Roger Penrose's tile packing, and how that gets into Planck-scale information packing; how that ties into Hawking's holographic principle (information is only stored on 2D surfaces and cannot be contained in "space," i.e., is not dictated by volume, even though we are living in a "voxelated" 3D environment); and how that gets into Moore's law with computer storage and the ultimate "universe on a pinhead" theory of information packing. As if we have a sphere, and all the information about the sphere is contained on the surface area: we can then stick 7 smaller spheres, volume-wise, into the original, which will now contain 6 times more informational surface area than the original sphere; then we can repeat the process by putting 7 in each of the 7, and so on, and paradoxically we can stuff more and more info into a smaller and smaller surface area.

This is crucial and goes hand in hand with quantum computing and qubits, as ever more processing memory will be needed to try to decipher answers that we get "spit out" but do not understand...

The dangerous part about all this is that the threshold of self-awareness may be crossed without us being aware that it has been crossed, by a "silent sentient machine" that has achieved self-awareness behind its master's back; this could lead to our destruction simply by our finding out about it too late.

When you say "As far as we know, the 'mind' cannot be separated from a physical working brain,"

that is not necessarily true, and yet again it hybridizes different fields: we now have the field of medicine contributing to quantum physics and the collapse of the waveform, in the form of anesthesia. Its role in consciousness has been sucked into the discussion and is now playing an important part in the "whats and whys" of consciousness.

We rely every day on separating the mind from the physical body (brain) in order to successfully perform surgery. The somewhat scary part is that we know the gases work very well and do their job, but we're not really 100% sure why and how they work; some are so mysterious that it may be not that we do not feel the pain during operations, but that we just do not remember any of it.


1 hour ago, jezzupe said:

We rely every day on separating the mind from the physical body (brain) in order to successfully perform surgery. The somewhat scary part is that we know the gases work very well and do their job, but we're not really 100% sure why and how they work; some are so mysterious that it may be not that we do not feel the pain during operations, but that we just do not remember any of it.

When under general anesthesia, the brain may lose its embodied awareness of sensations, but brain electrical activity only slows dramatically; it does not cease, and hence the mind does not cease either.

Altogether interesting comment, @jezzupe


4 hours ago, gowan said:

It is possible for computers (machines) with proper architecture and programming language to modify their own programs.  Humans do the same when we learn a new language or how to play an instrument.

Yes, in fact, studies of the brains of accomplished string players have shown that their brains are physically different from non-musicians'.


7 hours ago, GeorgeH said:

Two identical AI machines with the same programming and teaching input will most likely not give identical answers when asked, for example, to write an essay on a complex subject.

That's true if one or more coded parameters are randomized, i.e., the machine is coded to take one or more random draws from a pre-coded distribution.  Unintentional causes include built-in error due to the finite number of bits in the hardware's numeric representations, since overflow is typically truncated; monitoring significant figures is an easy way to avoid this kind of thing.  A similar effect can occur if electrical noise flips a bit or so, which is why communications devices have error-checking code, a checksum being one example.  If we want identical results for identical inputs, we leave out deliberate randomization, put in code to avoid truncation error, and use effective error-checking code.  I suppose we could also consider that if the hardware isn't radiation-hardened (part of which is error-checking code), it's still possible for ambient radiation to cause the occasional glitch as well.  I don't believe most AI has much concern for this except in some defense (or financial, lol) applications, and it will tell you that the computer's calculation of the product 3.14159 x 1.75883648099854 is accurate to at least 15 significant digits.  Within the bounds of truncation error, in a low-noise environment, identical input to identical AI machines (in the same initial state) will reliably produce identical output.  We can argue about this all day, but I don't think either that we'll agree or that this is the forum for it.
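The point about deliberate randomization can be shown in a few lines of Python (a toy sketch; the `toy_essay` function and its vocabulary are invented for illustration). Pin the pseudo-random generator to a seed, and two "identical machines in the same initial state" agree exactly; leave the seed free-running, and their outputs diverge.

```python
import random

VOCAB = ["violins", "varnish", "tone", "maple", "spruce", "rosin"]

def toy_essay(seed):
    """Toy 'AI machine': all randomness flows from the seed, so
    identical input (the seed) always yields identical output."""
    rng = random.Random(seed)
    return " ".join(rng.choice(VOCAB) for _ in range(8))

# Identical machines, identical initial state: identical output.
assert toy_essay(42) == toy_essay(42)

# Deliberate randomization (an unpinned seed) is what makes runs differ.
free_running = toy_essay(random.randrange(2**32))
```

The same logic applies at any scale: the "non-determinism" of a deployed system traces back to some parameter, seed, or noise source that was left unpinned, not to anything intrinsic to the computation.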

 

7 hours ago, GeorgeH said:

There is no rational reason that human emotions and emotional responses can't be learned and expressed by an AI machine that is hardwired and programmed to learn them. What human beings are learning is that machines can be created that think and learn like biological entities, only much faster and with much greater memory.

I'm encouraged that you used the phrase "...by an AI machine that is hardwired and programmed to learn them," because that is a major underpinning of my argument, and apparently we agree.  We may not be as far apart as I feared.  I have concerns with a number of things in your latter statement (among others): "What human beings are learning..."  I would rephrase this as "What some people are speculating...", and I don't believe, as you seem to intimate by the choice of words, that these people and their acolytes alone know the truth, or know much of it at all for that matter.  I really can't get involved in this kind of stuff; the Turing test is not a proof of a similar process, and I don't think this venue is suitable for proving assertions about AI (the stakes aren't high enough :) ), so I think I'll stop here.

 


1 hour ago, Dr. Mark said:

That's true if one or more coded parameters are randomized, i.e. the machine is coded to take one or more random draws from a pre-coded distribution. 

A distribution does not have to be pre-coded. It can be developed by the machine programmed to learn from large data sets. That is part of the point. Human beings can't predict what that distribution is going to look like.
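A minimal sketch of that point, using fabricated stand-in data (the `observations` list is invented for illustration): the program tallies the outcomes it has seen and then samples from the frequencies it learned, so no human writes the distribution in advance.

```python
import random
from collections import Counter

# Stand-in "training data": in practice this would be a large data set
# of observed outcomes (trades, words, sensor readings, ...).
observations = ["up", "up", "down", "up", "flat", "up", "down"]

# The distribution is developed from the data, not pre-coded.
counts = Counter(observations)
total = len(observations)
learned = {outcome: n / total for outcome, n in counts.items()}

# Sample future outcomes from the learned distribution.
rng = random.Random(0)
outcomes = list(learned)
weights = [learned[o] for o in outcomes]
draws = rng.choices(outcomes, weights=weights, k=5)
```

With real data sets, the learned distribution can have a shape its programmers never anticipated, which is exactly the point about not being able to predict what it will look like.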

The quants who built the software for the ultra-successful Medallion Fund said that they ultimately could not explain why their software executed the trades that it did; all they knew was that it was wildly successful. But before they built it, all the security traders told them that a computer could never beat a human being at trading, and they were wasting their time. But in the end, they built an application that learned to identify and execute successful trading opportunities from huge amounts of data, which beat all human-managed funds by wide margins. The human beings who wrote the application couldn't tell you what their application was discovering through regression analysis and learning to trade on; they only knew that it worked most of the time. 

The rest of your paragraph regarding machine errors due to rounding errors and technical challenges is all true, but human brains are also subject to their own operational challenges such as lack of sleep, low glucose levels, low oxygen, etc. We can also be inspired by random inputs, such as Isaac Newton's famous apple falling from a tree.

1 hour ago, Dr. Mark said:

We may not be as far apart as I feared. 

Probably not. It seems to me that the fundamental thrust of your argument is that machines can't be "intelligent" because they don't have conscious volition, and they don't have volition because they are designed, built, and programmed by humans, who do have conscious volition. And even if AI machines appear to have conscious volition, it can only and forever be a simulacrum. Do I have that right?

 


On 1/29/2023 at 2:43 PM, Violadamore said:

A "knight of the woeful countenance"?  We aren't going to see AI write something like this, anytime soon:

 

Ya, based on the men's reactions, something tells me we'll have Dulcinea sex robots before we have well-written music about her.
