Maestronet Forums

Can YouTube videos teach you something about violin sound?


Anders Buen

If you are going to do any kind of meaningful analysis of recorded violin sound, a flat microphone must be used, such as an AKG C460 or an equivalent flat-response small-diaphragm reference condenser mic. Then the instrument must be played in an anechoic chamber to eliminate room reflections and equalization, and recorded using nothing more than a flat-response preamplifier feeding a flat-response recorder, preferably digital. This creates a repeatable benchmark, provided the instruments are always played at the same distance from the microphone by the same musician.

Violins, like most things, sound terrible in an anechoic chamber, but it reveals the true sound of the instrument, uninfluenced by reverb, comb filtering, and the other things a reflective room does to the sound.

A Radio Shack microphone, cassette recorder, etc. may be used for some very rough comparisons, but I doubt the results will offer much in the way of useful information. As for evaluating violins on YouTube, you are at the mercy of the sound engineer who recorded it. A good engineer can make a lousy instrument sound somewhat acceptable, and a mediocre violin sound quite good, through EQ, mic selection and placement, reverb, etc.

It is for this reason I don't put much stock in sound clips of instruments on the internet. They may be interesting to listen to, but I think there is little to be gained save for entertainment purposes.

I get your point. But you do not go to a reverberation chamber to test how a violin sounds, do you? Nor do you go to an anechoic chamber. You are likely to do it in your natural environments: your home, a music studio, or a practice room. If you can assess sound in your natural environments using your ears, you can also assess recorded music made in natural environments.

I am not so frightened by the coloration you get from nearby reflections, as that is the natural environment we live in; a reflective floor will always be close to you, and thus a floor reflection.

A normal room gives you something in between a reverberation chamber and an echo-free room. Good enough for assessments using your ears, and good enough for recordings.

The only way to figure out how sound from videos compares to the real instrument is to do the analysis on both signal paths and compare them. When codecs and compression algorithms are developed, such tests are performed, so it has already been done. But to find out for sure, one can run such tests. Have you done that? If so, could you publish it here, so you can prove that you are right?

Anders


A good set of ears can detect nuances that the best microphone in the world cannot accurately capture.

No, I am sorry, the opposite is true. Both frequency and amplitude resolution are better in a good recording system than in your ears. However, there is a brain behind the ears that can do things a mic and a recording device can't. It can, and needs to be, trained. It will take us a long development time to beat the processing capability of the human listening system. That is a lot of looking at curves and plots to become as good at it as we are at listening.

Having said this, you can indeed get details from frequency plots that you cannot easily hear.

If you knock your violin bridge and listen to it, what tones do you hear?

I bet you will hear the air resonance, the strings ringing if not damped, and maybe the two main wood modes, but beyond that you cannot distinguish any more. I can tell you that an FFT of that knock sound will reveal the whole set of, say, 50 resonances that take part in sound production. That is more than ten times the amount of tones and information...
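The knock-and-FFT idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual measurement setup: the "knock" here is synthetic, three decaying modes at made-up frequencies standing in for the air resonance and two wood modes, and the peak picker is deliberately crude.

```python
import numpy as np

def tap_spectrum(signal, fs):
    """Frequency axis and magnitude spectrum of a tap response."""
    window = np.hanning(len(signal))              # reduce spectral leakage
    mag = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return freqs, mag

def find_peaks(freqs, mag, threshold_ratio=0.1):
    """Crude local-maximum peak picker above a relative threshold."""
    thr = threshold_ratio * mag.max()
    return [freqs[i] for i in range(1, len(mag) - 1)
            if mag[i] > mag[i - 1] and mag[i] > mag[i + 1] and mag[i] > thr]

# Synthetic "knock": three exponentially decaying modes (hypothetical
# frequencies, chosen only for the demo).
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
knock = sum(np.exp(-8.0 * t) * np.sin(2 * np.pi * f * t)
            for f in (275.0, 450.0, 550.0))

freqs, mag = tap_spectrum(knock, fs)
peaks = find_peaks(freqs, mag)
```

A real bridge tap would of course show many more than three peaks; the point is only that the spectrum resolves modes the ear cannot separate.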

Anders

So, basically you're proving the point: players know that strings make a difference, but the instrumentation doesn't see the difference.

The instrumentation (I assume you mean computer, microphone, etc.) does see the difference just as clearly as we do. The problem is that the brain has a habit of playing around with what the ears perceive. For instance, violins produce a very feeble sound at 196 Hz (open G), yet we hear it very clearly when we play the violin. The reason is that the brain 'collects' all the harmonics produced by the instrument, recombines them, and concludes that it is hearing an open G string. Depending on which set of harmonics is present, it will apply a tone color to the sound.
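The effect described above is the classic "missing fundamental", and it is easy to demonstrate with a synthetic signal. The sketch below builds a tone from harmonics 2 through 6 of 196 Hz only; the spectrum has essentially no energy at 196 Hz, yet a listener would still report the pitch of the open G. (The signal is synthetic; nothing here comes from a real violin recording.)

```python
import numpy as np

# A complex tone built from harmonics 2-6 of 196 Hz, with no energy
# at 196 Hz itself.  The ear/brain infers the fundamental from the
# 196 Hz spacing of the harmonics.
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s -> 1 Hz bin spacing
f0 = 196.0
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 7))

mag = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1.0 / fs)
# mag[196] is essentially zero, while mag[392], mag[588], ... are large.
```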

I've been advocating for some time that acoustic testing programs include an algorithm that weights some frequencies over others, in order to give a more accurate representation of what we hear.

Violin acoustics is really in a very primitive, early state of development. The brain and ear trump a computer in evaluating a violin any day of the week. However, there are things a computer can do that the ear can't (or not easily), and the potential for advancing the art and craft of violinmaking is (potentially ;-) great. This is why so many successful, top violinmakers are intensely interested in the subject.

Oded

The FFT, however, will not indicate the phase relationships between the different frequencies, and the filter banks of the FFT further skew the phases. The microphone will document some things in which the ears are lacking in detail; however, in things such as phase relationships and transient attack, the ear far exceeds what a microphone diaphragm will render. Granted, the phase differences are interpreted by the brain, but not in a quantifiable amount. Ultimately it's our ears we have to please.

The anechoic chamber was put forth for repeatable comparison between instruments, for analysis and documentation. This cannot be done accurately in anything other than an absorptive environment; otherwise the results are skewed by the effects of the room, which we find colors the sound in a manner pleasing to our ears. I wouldn't suggest evaluating a violin's timbre by listening in an anechoic chamber. This is easily demonstrated by playing a violin in a reverberant space, like a high-ceilinged church, and then playing the same instrument in a clothes closet. Which sounds more pleasing? Which one is the more accurate sound of the instrument? I'm not shunning analysis by electronic instrumentation in any way; I use this equipment myself all the time. But wave analysis colored by reverb, equalization, etc. is of little use.

Comparing a waveform from a recording without knowing all the parameters of the recording setup, and without taking into account the phase smear of the electronics, microphone placement and response, room parameters, effects processing, etc., is a moot point, as it is an uncontrolled situation.

I agree; a bandwidth of 150 Hz to 22 kHz would do well for evaluating violin response. I believe there is some noise produced by the instrument below 196 Hz that contributes to the overall sound, so a decision would have to be made whether to include this lower portion of the bandwidth. The computer and electronic instrumentation are superior in most cases for comparative analysis: not necessarily for indicating which instrument is superior, but as a means of evaluating what spectral content contributes to a better-sounding instrument.
This is why so many successful, top violinmakers are intensely interested in the subject.
Which top makers are interested in this thread?

Very short list of makers interested in acoustics:

Sam Z, Joe Curtin, Gregg Alf, Tom Croen, David Burgess, Guy Rabut, Peter and Wendy Moes, Michael Darnton, Bill Scott, David Polstein, Joe Grubaugh, Peter Goodfellow, Robin Aitchison, Terry Borman, Martin Schleske.....

Oded

Phase information is not at all important for the determination of timbre. Phase matters for determining where the sound comes from. I would recommend the ASA's demonstration CD "Auditory Demonstrations", which addresses the central psychoacoustical phenomena that matter, using sound clips as examples.

When it comes to ideal rooms for documenting absorption coefficients and the source strength of sound sources, the reverberation chamber matters. A reverberation chamber is an attempt to construct a room that follows Sabine's formula for reverberation time, a so-called "diffuse" sound field. Normal rooms, and all the environments we are used to, are not ideally diffuse, and we do not need the theoretical formula and sound field to be present in order to hear and assess sound.
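Sabine's formula mentioned above is simple enough to state directly: T60 = 0.161 V / A, with V the room volume in cubic metres and A the equivalent absorption area in square metres. A minimal sketch, with example room numbers that are assumptions for illustration, not figures from the thread:

```python
def sabine_rt60(volume_m3, absorption_area_m2):
    """Sabine reverberation time: T60 = 0.161 * V / A (SI units)."""
    return 0.161 * volume_m3 / absorption_area_m2

# Example: a hypothetical 75 m^3 practice room with 12 m^2 of
# equivalent absorption gives a reverberation time of about 1 s.
rt = sabine_rt60(75.0, 12.0)
```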

In an anechoic chamber it is possible to measure the sound radiation directivity around sources. It is basically the environment we have in free air, or out on a large surface with no reflecting objects around. There is no reason to believe that such a room is better than a real, normal room for listening. It is an odd, unnatural situation to listen to instruments (and play them) in such rooms, so it is not a good place for sound assessments. Good for certain measurements, but not good for sound assessments. You would have to mic up the room with hundreds of mics in order to get the same information from the source as you get in a normal room.

I know this because I do a lot of auralisations in room acoustical calculations and measurements. The sound files for that are usually completely dry, but they only need to be drier than the environment you are about to model. And if recorded using one mic, the violin response becomes unnatural. I know this and can prove it by showing figures for it.

I have written a paper for the IOA Auditorium Acoustics Conference 2008 on "How dry do recordings for auralisation need to be?" It deals with some of the aspects you are mentioning above. Many good recording studios are not plain 'dead'. If you want to look into that, you can look at the pictures and listen to auralisations done using Altiverb and a sound-producing software. Recording studios are neither echo-free nor reverberation rooms. So how can that be? Are the recordings not good for reproduction?

Anders

Good list of names there; you forgot Harris.

Which top makers are interested in this thread?

Don't take that the wrong way; I'm sure there's merit in the analysis, but it's just something I don't bother with much.

Personal preference and the expression of such, so to speak.

Of course a good maker will be interested in 'acoustics', but that wide field can range from concert hall trauma through to choice-of-strings neurosis, over to hearing aid technology and computer software, with some violin making thrown in.

I'm not gonna pretend to know much about acoustics, Oded, though I do think it a worthy pursuit.

As far as acoustics goes, I make a violin, string it up, play it, sell it, done.

So far I hit lucky and they sounded good.

The difference in the sound of each instrument is fascinating to me, as is the sound of each player.

Cheers.

This must be the wrong forum for you! I am an acoustician by profession. I do not learn anything from what you are saying.

I think this forum is about as wrong or right for me as it is for you. And I did not intend that you learn anything from my posts.

Back to the OP: no, there is nothing to learn from YouTube videos, and they can't be used to analyse violin sounds. I do get your idea and I like it, but it just doesn't work that way. You need clear, defined recording setups to get verifiable data, and you don't find that on YouTube. Most YouTube recordings are lousy anyway, captured with bad microphones, horrible limiters, etc., and you will never know how much processing and engineering has been done before you can get your hands on the material.

I value your ideas, but I honestly think you are using the wrong sources.

I think you are missing my point; a recording made for analysis and comparison is very different from one made for listening enjoyment. An analysis recording, as I am referring to it, is used for spectral content analysis, and a room with any reflections is going to augment or diminish certain frequencies depending upon the room acoustics. At that point the microphone is responding not only to the instrument but to the reverberant modes of the room as well, thus skewing the results. As I am sure you are well aware, very careful room treatment and design is implemented in recording studio control rooms and recording spaces to achieve a sonically neutral listening environment, enabling an engineer to make intelligent and subjective decisions on the spectral content of the mix. If the room had a resonant peak at, say, 250 Hz, he would tend to attenuate this portion of the spectrum, with the end result of putting a notch in the finished product at 250 Hz.

All rooms have resonances defined by the shape and dimensions of the room, unless great attention is paid to design and treatment to eliminate them. Record a violin for spectral analysis in a room with a 315 Hz resonant peak, and this will show up in your spectral content as an unusually high peak in the violin's response, even though the violin didn't create the peak. Further, the peak will vary with the phase relationship between the violin source and the standing wave in the room, depending on where the microphone is placed. Is this the method of analysis you are suggesting?

You are aware that I am an acoustics consultant with a speciality in room acoustics measurements, design, and calculations? I have done a fair amount of violin acoustics measurements over the last 15 years. I have only once come across a situation where a room mode appeared in the frequency response of a violin. By comparing responses from many instruments, a peak in the same place in all of them is a candidate for a room mode.

There are always room modes present, but they are the exception rather than the rule as a problem. And they are even more seldom a problem in the playing range of the violin, that is, above 200 Hz. Recordings are generally done in the near field, so the reverberation from the room is also less of a problem than it would be if the mic were distant. The reflections from the room that add up in the microphone signal are normally so dense and complex that the phase of a single reflection relative to the direct signal is uninteresting in most cases. I thus assume that the room is of a normal, small size and that the mic is not closer to a surface than, say, 1 m or so.
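The near-field argument above can be made concrete with the standard critical-distance estimate: the distance at which the direct and reverberant levels are equal for an omnidirectional source in a diffuse field, roughly r_c = 0.057 * sqrt(V / T60). A sketch, with example room numbers that are assumptions for illustration, not measurements from the thread:

```python
import math

def critical_distance_m(volume_m3, rt60_s):
    """Distance at which direct and reverberant sound levels are equal,
    for an omnidirectional source in an approximately diffuse field:
    r_c ~= 0.057 * sqrt(V / T60)."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

# Hypothetical example: a 75 m^3 room with T60 = 0.5 s gives a
# critical distance of roughly 0.7 m, so a mic within that radius
# is dominated by the direct sound of the violin.
rc = critical_distance_m(75.0, 0.5)
```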

Every investigation is a journey like the one I have given an example of. You have to do it to be able to talk about it. The trend in the data is fairly clear: student instruments have a weaker low-frequency response than the top violins. That has nothing to do with the recording equipment; it is the violins. At least the recordings of the top players were done with top-of-the-line equipment for their time, I would believe. There has been a conversion to video with some processing, but still, my point is that this can be done: you can link what you hear to what you get out of the spectra. I do not think you are right that the equipment is so bad, not even in the amateur recordings of the student violinists. It is likely to be better than the top-notch equipment from, e.g., the '60s.

I have to disagree with you. I have a small 1600 sq. ft. 24-track recording facility, and the difference in sound before I acoustically treated the rooms and after is strikingly different, especially in the fundamentals range of 150 to 300 Hz. Did the instrument change? No. Did the equipment change? No. So what rendered a different sound? The room.

If you feel that encouraging someone to use a Radio Shack microphone plopped in front of a violin in a corner of the basement will render any meaningful results, I beg to differ. I'm not trying to get into an argument here, but I think you are drastically playing down how room acoustics contribute to the way a violin, or any instrument or voice, projects. Ignoring room resonance doesn't mean it doesn't exist or doesn't play a significant role in the resultant sound a microphone or a pair of ears will hear. You can do the clothes-closet-versus-bathroom recording test using the same instrument and the same microphone and recording setup, and a child can tell you it sounds different.

No. The general sound level is different in different rooms, but you do not get a large difference in the near field of a sound source by adding acoustical treatment to the room. The near-field sound is dominated by the direct sound, which is not altered. You probably will be able to hear whether a room is small or large, but so what? The sound that comes back from the room comes from the violin anyway. The general sound timbre can be assessed in a normal room, just as you are able to recognize people's voices in any environment.

There is just too much snobbery about sound production. (In fact it is almost all snobbery.)

So you do hear the difference in the sound on the first three notes played on the G string? First of all, the fundamental is very weak there, before it passes the air resonance at, say, 275 Hz or so. You really need a change of many dB to be able to hear the effect on violin timbre.

I know that sound treatment makes a room sound different, but that is in the reverberation; you remove flutter and things like that. But there are limits to how many dB of change you can get in the room gain factor, G. 10 dB is the absolute theoretical maximum achievable; normally some 2-3 dB is what you get in practice. A 2-3 dB change in the fundamental of notes played on the G string is not likely to be detected. [The reference is Claudia Fritz and Jim Woodhouse's article in JASA last year on that subject.] And you need to halve the reverberation time to get a 3 dB change in sound level. You can detect a change of about 5% in the reverberation time, which is pretty sensitive, but you really need big changes in order to hear the effect on the room gain factor. Moreover, it is much more difficult to achieve that in the bass than in the mid and high frequencies. I am pretty good at dealing with modes in the bass region, by tuning slatted panels and membrane absorbers for the purpose. But you need 'tons' of it to get effects large enough to work well.
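The 3-dB-per-halving claim above follows from the reverberant-field energy in a diffuse room being proportional to the reverberation time, so the level change is 10 log10(T_after / T_before). A quick check of the arithmetic:

```python
import math

def room_gain_change_db(rt_before_s, rt_after_s):
    """Change in reverberant-field level when the reverberation time
    changes; in a diffuse field the reverberant energy is proportional
    to T60, so dL = 10 * log10(T_after / T_before)."""
    return 10.0 * math.log10(rt_after_s / rt_before_s)

# Halving the reverberation time (e.g. from 1.0 s to 0.5 s, example
# numbers) costs about 3 dB of room gain.
delta_db = room_gain_change_db(1.0, 0.5)
```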

I think you will need to post the sound examples from before and after the treatment here, so you can prove your point.

I'm not a physicist, but I do a lot of recording. As I understand it, you record the frequencies that are there, and any frequency that is absent cannot be added by way of equalization. However, the actual level of the individual frequencies in a recording depends on other factors: equalization or the lack of it at the engineering stage, compression, digital conversion, microphones, equipment, and so forth. If a frequency exists in a recording, it can be boosted, lowered, or cut.

So to look at a spectrograph and say "this violin has a low reading at 200 Hz" may be telling you more about the recording. I'm not surprised a Luis and Clark violin sounds weaker at the low end than all of the professionally recorded examples you give, seeing as it was recorded on a video camera, probably with severe compression.
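The point above that EQ cannot create absent content can be shown in two lines: equalization acts multiplicatively on the spectrum, so a gain applied to a bin holding zero energy still yields zero. A toy sketch (the spectra here are made-up numbers, purely for illustration):

```python
import numpy as np

# EQ reshapes what the recording captured; it cannot add content
# that was never recorded.
spectrum  = np.array([0.0, 1.0, 0.5, 0.0])   # toy magnitude spectrum
eq_gain   = np.array([4.0, 2.0, 2.0, 4.0])   # an aggressive boost curve
equalized = spectrum * eq_gain               # absent bins stay at zero
```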

Compression really matters at high frequencies rather than at low ones. The weak low-frequency response of the Luis and Clark is likely caused by the construction and the design being on the stiff side.

Anders

I had a similar idea on this subject... there are recordings available of fabulous instruments, but it is beyond my hope to get my hands on them to make acoustic measurements. So how can you tell anything, with all those notes being played, the musician intentionally playing weak notes stronger and backing off on the strong resonances, and all that other stuff?

Maybe there's a way. If you notice, there's a little atonal "snick" when the violinist begins to play a note. Essentially, that is an impulse going into the instrument, which should respond with all of its body modes. You might also notice that the "snick" sounds the same no matter what string or note is being played. If you could snip out that tiny sound before the string builds up its tone, and do an FFT on it, in theory you should get a pretty good picture of the instrument response (colored by the room and mic, but there's not much to be done about that). Hopefully you could at least find the lowest couple of body modes of the instrument, which would be of some interest. Maybe you'd have to snip out a bunch of those "snicks" to get a better statistical sample.
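The snipping procedure described above might look something like this. It is a sketch of the idea only: the onset detector is deliberately naive, and the demo signal is synthetic (silence followed by a 300 Hz tone standing in for an excited body mode), not a real recording.

```python
import numpy as np

def snip_attack(signal, fs, snip_ms=20.0, threshold_ratio=0.05):
    """Cut out the first snip_ms milliseconds after the waveform first
    exceeds threshold_ratio of its peak: a crude 'snick' grabber."""
    env = np.abs(signal)
    start = int(np.argmax(env > threshold_ratio * env.max()))
    n = int(fs * snip_ms / 1000.0)
    return signal[start:start + n]

def snip_spectrum(snip, fs, n_fft=8192):
    """Magnitude spectrum of the snipped attack, zero-padded for a
    finer frequency grid (resolution is still limited by snip length)."""
    mag = np.abs(np.fft.rfft(snip * np.hanning(len(snip)), n_fft))
    return np.fft.rfftfreq(n_fft, 1.0 / fs), mag

# Demo: 0.1 s of silence, then a 300 Hz tone (hypothetical body mode).
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
note = np.concatenate([np.zeros(int(0.1 * fs)),
                       np.sin(2 * np.pi * 300.0 * t)])
snip = snip_attack(note, fs)
freqs, mag = snip_spectrum(snip, fs)
```

Averaging the spectra of many such snips, as suggested above, would reduce the influence of bow noise and room reflections on any single one.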

I should mention that I tried this on a recording of Itzhak Perlman, and didn't get what I had hoped for. However, my equipment and skill and determination might be to blame rather than the theory itself. Anyone else care to try this??

By the way, I have used a couple of FFT programs: SpectraPlus (not too bad) and the one by Ed Glass (he calls it SoundSnip). SoundSnip is the simplest to use and is specifically tailored to what instrument makers might want to see. However, he is a busy doctor by day, and this program is not always at the top of his priority list. And it ain't free (but it's reasonable).
