Maestronet Forums

A nice article by Sam Z.


David Burgess


I see. :)  My tendency would be to blame the mediocre result on some sort of failure of the "everything you see" mechanism and/or an inability to properly set up the object once it's been built. What people "see" and what's really there may exhibit huge gaps.

When I say "mediocre violin" I am thinking of various semi-dead ones, or ones lacking a full tone. These defects are, to me, more than just adjustment; I mean gross things that cannot be fine-tuned away. You would say it's a failure to really reproduce what you see, I think. I am saying perhaps there are things one has not anticipated, and has not seen or duplicated for that reason.

Hate to sound mysterioso, but I have an experiment to test an aspect of finishing that I have never heard mentioned on the forum. I will report back whatever I see.


Me, too.  It's undoubtedly the cheapest and most readily available replacement label. [Returns to sorting rubbish, finding one "keeper" in seven or so, mostly Stainer or Amati knockoffs.]  :)  :P  :lol:

 

We ALL knew that. You being surrounded by piles of rubbish is an image forever etched in our minds through your strenuous repetition. WE GOT IT !


Slightly off-topic. I read through the PNAS publication of the study by Curtin et al. focused on musicians comparing Old vs. New violins, and one takeaway was this: since it seems that musicians can't tell the difference between old fiddles and new makers' instruments (according to the results), the next logical step is to compare these new violins by makers such as Curtin to, say, some Jay Haide mass-produced Chinese fiddles. If we go by the reported results, musicians won't be able to discern the difference!



That might not be the big problem. What if they CAN tell? :) :) :)



Such instruments may have already been included among the contemporaries. They won't specify which instruments by which makers were used.

 

The experiments started not as an attempt to provide a buying guide, but to try to home in on what a good instrument is. For years, it was assumed that something like a Strad could serve as the reference standard. Then it was discovered that players and listeners might not always prefer the Strad, so more testing was done to shed more light on such issues.

 

As far as I have heard, there are no plans to turn this into "competition" between contemporary instruments.

 




I sincerely believe that a maker of your caliber does not believe that anything was "discovered" or that more "testing" will shed any light.

And I am being serious this time. 



Research is a funny thing.  You always go into an experiment expecting a certain result.  In this way you are able to design methods to test your hypothesis.  The exciting and fun part is when the unexpected occurs.  These surprise discoveries may or may not make it into the publication, but will be explored in future experiments.  Of course these are just general statements.

 

-Jim



 

Nothing here I could disagree with. :)


Research is a funny thing.  You always go into an experiment expecting a certain result. 

-Jim

Wrong. That is the UNSCIENTIFIC approach and tends to produce the expected result. I have encountered that situation several times in the real world. Early experiments need to be done with no expectations to get an idea of what might happen. Only after some knowledge has been accumulated is it appropriate to start forming hypotheses. And then do not discard data as "outliers" unless you know WHY it doesn't fit.



Hi Captainhook, you seem to be putting strong emphasis on the wrong part of what I wrote. The context of the conversation I was commenting on was that there was nothing new to be discovered by the experts in the previously mentioned experiments. My point was that even when you think you know what is going on in a system, testing your hypotheses can reward you with surprises. And yes, if I am going to dedicate part of my budget to an experiment, I had better have a pretty good idea of how that system will operate under the conditions I'm testing. Also, I was referring to research that has already passed the pilot-study phase, although I admit I didn't spell that out. I never discussed statistics, and usually don't, because most people understand a lot less about statistics than they realize, including many of my colleagues. Personally, I very much doubt we would be having a disagreement if we were sitting across from one another.

 

Cheers,

Jim



I have the same problem. I have written quite a few responses to people who did not follow my original proposed theory/experiment.

It was my fault for not driving points home with sufficient stress and repetition. But now I have started a new thread to bang out the whole issue with no (I hope) likelihood of misunderstanding. And there are tone implications posted after the physical argument/experiment proposal.


My school of engineering taught to start with a hypothesis, formulate an experiment to test it, predict what the result should be, and then after the experiment see how things matched up (or not), and why.  Admittedly, that's a fairly focused test scenario, and probably not applicable to many violin-related experiments where you start with nothing resembling a legitimate hypothesis, and therefore no clue what the hell might come out of it.


No, the study is not billed as a competition between makers, although it kind of is. It's really an attempt to see if musicians can tell the difference between old instruments and new instruments. I personally judge all instruments equally on their own merits, and I'm not interested in how old they are or where they came from. However, I don't understand makers' mania for blind testing. I remember everyone talked about doing this in violin making school over and over again. Isn't the bigger question, "What are the qualities of a good playing violin?" I think learning the answer to that question would involve interviewing and talking to musicians rather than testing them.


My school of engineering taught to start with a hypothesis, formulate an experiment to test it, predict what the result should be, and then after the experiment see how things matched up (or not), and why.  Admittedly, that's a fairly focused test scenario, and probably not applicable to many violin-related experiments where you start with nothing resembling a legitimate hypothesis, and therefore no clue what the hell might come out of it.

I was about to say that the hypothesis comes at the beginning of an experiment. I learned in high school chemistry and science that a hypothesis was needed in order to structure an experiment.

 

A hypothesis is formed by study and understanding of the previous experimentation and the current state of accomplishment in any given discipline.

 

Observation → hypothesis → experiment → conclusion. And the experiment must be structured to be reproducible by separate labs in order to check the data and conclusions objectively.

_______________________________________

 

By now there should be enough data, experimentation history, and independent principal investigators to have some general state of accomplishment established in violin testing? Yes, no? 

 

Or is the issue collating the data and experiment history all in one place where it can be accessed by any investigator? 

 

Or is the problem separating bunk investigations from useful, generally well-regarded ones? Is there consensus on what counts as legitimate investigation in violin testing?

___________________________

 

And the question of whether or not a hypothesis is legitimate is a double-edged sword: an oddball hypothesis might lead to an experiment that provides data and conclusions about which direction NOT to look. A strange or 'illegitimate' hypothesis is time-consuming and wasteful, but could save time compared with poor experimentation and later, more fruitless investigation.

______________________

 

I'm not a scientist, but one of my best friends is a molecular biologist and we have drunk several metric tonnes of beer together, so this qualifies me to make such statements. :lol: The money we spent on pints could fund a synchrotron. *hic*


The null hypothesis idea is also interesting, and it could have a root in engineering. For example, this is how one website defines the null hypothesis:

________________________________________________________

 

Null Hypothesis

There are two types of statistical hypotheses.

  • Null hypothesis. The null hypothesis, denoted by H0, is usually the hypothesis that sample observations result purely from chance. 
     
  • Alternative hypothesis. The alternative hypothesis, denoted by H1 or Ha, is the hypothesis that sample observations are influenced by some non-random cause.

For example, suppose we wanted to determine whether a coin was fair and balanced. A null hypothesis might be that half the flips would result in Heads and half, in Tails. The alternative hypothesis might be that the number of Heads and Tails would be very different. Symbolically, these hypotheses would be expressed as

H0: p = 0.5
Ha: p ≠ 0.5

Suppose we flipped the coin 50 times, resulting in 40 Heads and 10 Tails. Given this result, we would be inclined to reject the null hypothesis. That is, we would conclude that the coin was probably not fair and balanced.
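That quoted coin example can actually be checked with a few lines of code. Here is a minimal sketch in Python using only the standard library; it computes an exact two-sided binomial p-value under the fair-coin null (the function name is mine, not from the quoted site):

```python
from math import comb

def binom_two_sided_p(n, k, p0=0.5):
    """Exact two-sided p-value for k successes in n trials under H0: p = p0.

    For the symmetric p0 = 0.5 case, sums the probability of every
    outcome at least as far from the expected count as k is."""
    tail = abs(k - n * p0)  # distance of the observed count from expectation
    total = sum(comb(n, i) for i in range(n + 1)
                if abs(i - n * p0) >= tail)
    return total / 2 ** n

# 50 flips, 40 heads -- the example from the quoted definition
p_value = binom_two_sided_p(50, 40)
print(p_value)  # far below 0.05, so we would reject H0: the coin is
                # probably not fair, just as the quoted text concludes
```

The p-value comes out on the order of 10^-5, which is why 40 heads in 50 flips is such a comfortable rejection of the fair-coin hypothesis.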

_________________________________________________________________________________

 

The problem: 

 

Who decided that flipping the coin was the best way to conduct the experiment? I know they just used that as an example, but in reality, in order to come up with a good null hypothesis, which might be useful in some areas of violin testing, the engineering of the experiment matters a great deal to whether the data behind the null hypothesis will be valuable.

 

But again, I'm viewing science through my beer goggles...... :P  :D


Well, at my school, Miller Creek Junior High, we learned that if some lobbyists, bankers, and military guys get together with enough money, they can prove that Coca-Cola is good for you and that Marlboros are "refreshing and healthy," scientifically speaking of course. Thank god aerospace, chemistry, and the like aren't psychology, because apparently more than half of the "scientific" research, research that dictates laws, funding, and "the way it is," can't be duplicated. I like to blather a lot, but really my favorite line is from Hogan's Heroes: "I know nothing." :}


No, the study is not billed as a competition between makers, although it kind of is. It's really an attempt to see if musicians can tell the difference between old instruments and new instruments. I personally judge all instruments equally on their own merits, and I'm not interested in how old they are or where they came from. However, I don't understand makers' mania for blind testing. I remember everyone talked about doing this in violin making school over and over again. Isn't the bigger question, "What are the qualities of a good playing violin?" I think learning the answer to that question would involve interviewing and talking to musicians rather than testing them.

Eric, it looks like the explanations are largely available from your own post.

One of the objectives of blind testing is to find out how players and listeners actually rate an instrument, versus taking a poll of their preconceived prejudices.

As you mentioned, part of one of the experiments had to do with whether players could tell (from playing and listening) if an instrument was old or new. When asked, a high number of players thought they could. The results of the experiment suggested otherwise. The same type of thing has happened with other elements of the experiments. So sometimes, testing will yield information which can't be had by asking alone.


David, I totally agree with that. I have no problem with the spirit of inquiry and quest to learn new things through experimentation. As a matter of fact the violin makers and their community are miles ahead of the bow makers in that respect, due largely to the work of people who attend the acoustics workshop at Oberlin. In an effort to take the Curtin study seriously I read through it and all its attendant documentation and letters. We can have a discussion about the methods used and the various results, but the fact remains at least somebody is pursuing their interests in an effort to learn more.

That being said, I find there is often too large a gulf between people in the violin business and the people they seek to serve, namely musicians. Being a practical guy, I'm just saying: in addition to creating studies and tests, let's bring more musicians into the picture and learn from them, in an effort to be better at what we do.

Merci.


The experiment is still badly flawed.  Everyone seems to underestimate/ignore the power of suggestion.

 

To the 'Strad fans,' all Strads MUST be the best... and the anti-Strad group would be biased against them.

 

I think what you need to do is a blind sound test with a group (or groups) of instruments, where the testers have NO idea whether there is a Strad or other Big Name among them. They could all be Mendinis. Or they could ALL be Big Names.

 

You need to repeat this experiment several times. Ideally you would use the same room, the same players, and the same audience. ONLY the instruments would change, and NO one would know ahead of time what the violins being tested might be.

 

It really wouldn't be that hard to set up if anyone really wanted to. 

