The Acoustic Guitar Forum

The Acoustic Guitar Forum > General Acoustic Guitar and Amplification Discussion > RECORD

  #31  
Old 03-26-2010, 03:02 PM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

Again, thanks for the advice and explanations, Pokiehat.

I tried to mix a couple of things, and posted them in the "show and tell" section. It's not very good, I am afraid. I tried doubling the guitar and using a phase offset, but couldn't figure out how, so I just made them slightly out of sync (as if it were two guitars slightly off from each other). I did the same with my friend's voice: gave one of the tracks an effect and left the other alone (and panned them, too). The voice seemed to turn out better than the guitar, which I am having a lot of trouble with.

But I will work on it more in a few days when I am fresher (been at recording, playing and mixing for the last 12 hours, I now see) - and will try other things out. It's hard to make decisions about things, but I will keep at it and keep reading.
__________________
a few fingerstyle country-blues and folk tunes

"Yeah!" - Blind Boy Fuller
  #32  
Old 03-26-2010, 05:16 PM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

Friendly word of advice: don't do 12-hour marathon mixes. It will destroy your ears and you won't get much out of it, because after a few hours frustration and boredom set in, you lose concentration, your ears are shot and you won't be thinking logically or listening critically.

It's far better to spend half an hour here and there, take frequent breaks and just make sure you are alert, fresh and ready to go when you are mixing. Then when you are on a break, completely forget about it and come back when your ears are fresh. The learning curve is immense, but it's all simple and logical if you take it one step at a time and do it in the right order. If you need help setting up Voxengo SPAN/Sound Delay and VST, MIDI, arming a mixer etc., give us a shout in this thread and I'll help as best I can.
  #33  
Old 03-28-2010, 02:39 AM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

Yes, frustration did come early - I understand it isn't productive to keep going, but it's like a new toy, and it was hard to stop...

I got the Sound Delay and Span plug-ins to work now. I re-did the tracks (with a new guitar on one) and tried a simpler mix of them. Still not very good, but possibly an improvement. Will keep trying, anyway.

On the sound delay - I used the guitar track twice and panned them. The left is the actual recording with a little eq off the bass. The right uses the "stereo delay" default setting (I think that gives you 10ms?) on the plug-in, with the stereo-side-chain. I also used a reverb plug-in, but it's probably too much.

I pretty much used the sound-delay on everything, though. The doubled vocals have mid-channel delay (50ms?) through the mid-side-chain (if it's called that) - it sounded a little strange with the stereo side chain. It's a pretty cool plug-in... but I'm sure I'm just abusing it...

And Span - I'm not sure what to do with it, not sure what I'm seeing with it. I left it pretty much alone, it's a lot more involved than the delay. But I'll keep looking at it - it'll have to eventually make sense, I figure.

Thanks again!
  #34  
Old 03-28-2010, 06:26 AM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

Span just gives you a realtime readout of amplitude over frequency. If you look back at the spectrum/EQ curve of the piano I posted earlier in the thread, you can see where most of the acoustic energy is concentrated. In harmonic instruments like guitar and piano you will notice a fundamental partial (a big spike in amplitude at the lowest frequency). Then at lower amplitudes you get a series of harmonic spikes that eventually decay.

The x axis represents a frequency range from 0hz to about 20,000hz, and the ideal range of human hearing is roughly 20hz to 20,000hz. 20hz is very low, borderline inaudible bass; 20,000hz is borderline inaudible treble. So all the sibilance, hiss and other high frequency content of the signal is towards the right side of the graph. All the booming, low bass energy is on the left side of the graph. And the middle is the nasal, 'telephone' like range.

The graph just shows you where the energy is concentrated, and if you compare spectrums you can see where all the acoustic energy sits in relation to everything else. If there's a lot of concentration in the same frequency band then something has to give.

If you get your piano and play an A (440hz fundamental) you will see a large spike pop up on the graph at 440hz and then a series of smaller 'harmonic' spikes which are mathematically related to 440. If you try the same thing with an instrument that is not harmonic (e.g. a snare drum) you won't have this harmonic structure, because it doesn't have a constant pitch reference. This is why you can tune a snare drum but it never sounds like you can play a 'musical note' with it. Either way, Span will tell you where the acoustic energy of the snare drum is concentrated, and once you know that you are in a position to shift emphasis away from it - hollow out some room so other instruments can sound through in the same space.
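To make the harmonic structure concrete, here is a toy Python sketch of what a spectrum analyser like SPAN computes - a magnitude spectrum. Everything here (the fake "plucked string", the sample rate, the bin maths) is invented for illustration; it is not how SPAN itself is implemented:

```python
import math

SR = 44100   # sample rate in Hz
N = 4410     # analyse 0.1 s of audio, giving SR/N = 10 Hz per DFT bin

# fake "plucked string": 440 Hz fundamental plus two quieter harmonics
signal = [1.0 * math.sin(2 * math.pi * 440 * n / SR)
          + 0.5 * math.sin(2 * math.pi * 880 * n / SR)
          + 0.25 * math.sin(2 * math.pi * 1320 * n / SR)
          for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k (bin k corresponds to k * SR / N Hz)."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return math.sqrt(re * re + im * im)

# 10 Hz per bin, so 440 Hz is bin 44, 880 Hz is bin 88, 1320 Hz is bin 132
peaks = {f: dft_magnitude(signal, f // 10) for f in (440, 880, 1320)}
# the fundamental spikes highest; the harmonics come out progressively quieter
```

Plot `dft_magnitude` over every bin and you get exactly the kind of amplitude-over-frequency graph described above.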

Quote:
I used the guitar track twice and panned them. The left is the actual recording with a little eq off the bass. The right uses the "stereo delay" default setting (I think that gives you 10ms?) on the plug-in, with the stereo-side-chain.
I don't understand what you mean by the 'stereo sidechain' in this context. Could you explain a bit?

Quote:
I pretty much used the sound-delay on everything, though. The doubled vocals have mid-channel delay (50ms?) through the mid-side-chain (if it's called that) - it sounded a little strange with the stereo side chain. It's a pretty cool plug-in... but I'm sure I'm just abusing it...
50ms is a long delay. If I do this on a dual mono kick drum, as I've just done right now, I can hear it 'flamming' - that is, I perceive it as having an 'echo'. I was using delays of less than 10ms on your guitar/piano.
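The millisecond figures are easy to sanity-check, since a delay is just a number of samples at a given sample rate. A quick sketch (the comments reflect the rule of thumb above, not hard thresholds):

```python
SR = 44100  # samples per second

def ms_to_samples(ms, sr=SR):
    """Convert a delay time in milliseconds to a whole number of samples."""
    return round(sr * ms / 1000.0)

def delay(x, ms, sr=SR):
    """Return x delayed by ms milliseconds (zero-padded at the front)."""
    return [0.0] * ms_to_samples(ms, sr) + list(x)

doubling = ms_to_samples(10)  # 441 samples: fuses into one 'doubled' sound
flam = ms_to_samples(50)      # 2205 samples: long enough to hear as a flam
```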

I also have no idea what you mean by 'mid channel' and 'mid channel sidechain'. A sidechain is just an auxiliary input. When I'm sending multiple mixer channels to another mixer channel in order to group them together, I'm not routing those channels to auxiliary inputs. I'm literally unplugging the channel 1 and channel 3 outputs from the master bus and feeding the channel 1 output into the left and the channel 3 output into the right channel 4 main inputs, then making sure the channel 4 output is connected to the master bus (so I hear only channel 4 on the monitor mix).

Does this make sense? If you are having trouble, let me know where and I'll make a youtube video or something.

Last edited by Pokiehat; 03-28-2010 at 06:32 AM.
  #35  
Old 03-28-2010, 09:20 AM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

Ah, thanks for the explanation about the graphs in SPAN! It does help to know a bit more of what it is showing, I think I get the idea of what it is now.

If you have a lot of build-up in one area, then - you use the delays and eq to move things around? I never really understood how much or how little eq to take off of things. Is a little (3db?) already enough, or does it just depend on what is going on in the song? I'll take a look at SPAN a little later and see if I can figure things out - I do suppose time spent learning what it means might be more helpful than just messing about, like I am now. But how do you know how much build-up in one area is too much?

And sorry, I did use the wrong words (had it installed on my laptop, but now I put it on my internet computer too). I meant to say:

When I use the pre-set "stereo delay", I get a 1 on the x10 dial (so I assume it's 10ms), then if I click on "routing" I chose "stereo side-chain". That's all I know and did with it on the guitars.

"Mid-side delay" I chose for the voice (under the pre-sets), and under "routing" chose "mid-side stereo". Then, under "group 2" I see the "x10" dial has the number 5 in it. Group 1 has no numbers on the dials.

With the voice, I hear no echo-type effect (like it's far too much delay), but it does seem "broader". But I may just be using this stuff wrong (and then just think I hear things, or hear things that have nothing to do with the "routing" I am choosing).

I am a little (well, a lot...) confused now - do you actually use/need extra hardware then? I am just using a program and earphones; every track just plays at the same time. Maybe I am missing the fundamental ideas altogether.

I hate taking up all your time (but appreciate the advice, of course!)
  #36  
Old 03-28-2010, 11:43 AM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

Ok, now I know why I'm confused. I'm using a much older version and never upgraded, so all that stuff is new to me.

http://www.voxengo.com/product/audiodelay/

That's the version I'm using, and it's much simpler, with none of that internal routing stuff (which you won't be using until you are at a very advanced stage anyway).

I fiddled with the new Sound Delay for a bit and frankly can't fathom the routing page at all, or why you only have one row of knobs for left + right delay, thus requiring you to change...something I haven't figured out yet if you want to insert a delay between left and right.

Solution: use the older version. It's dead simple.
  #37  
Old 03-28-2010, 11:57 AM
rick-slo's Avatar
rick-slo rick-slo is offline
Charter Member
 
Join Date: Nov 2004
Location: San Luis Obispo, CA
Posts: 17,229
Default

The Voxengo Audio Delay (I have the older version, with R and L volume controls also) comes in very handy. I use it all the time on stereo tracks (phase align R and L for best correlation - or best sound, anyway - and match volume levels on R and L). Sorry to see Aleksey discontinued the simpler version, but I will have to download the new version and check it out.
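The "phase align for best correlation" step can be sketched in plain Python: slide one channel against the other, keep the lag with the highest cross-correlation, then shift by that lag. This is just an illustration of the idea, not what the plugin does internally:

```python
import math

def best_lag(a, b, max_lag=20):
    """Lag in samples by which b trails a, via brute-force cross-correlation."""
    def corr(lag):
        lo, hi = max(0, -lag), min(len(a), len(b) - lag)
        return sum(a[n] * b[n + lag] for n in range(lo, hi))
    return max(range(-max_lag, max_lag + 1), key=corr)

sr = 44100
left = [math.sin(2 * math.pi * 200 * n / sr) for n in range(1000)]
right = [0.0] * 5 + left[:-5]        # right channel arriving 5 samples late
lag = best_lag(left, right)          # finds the 5-sample offset
aligned = right[lag:]                # advance the late channel to line up
```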
__________________
Derek Coombs
Youtube -> Website -> Music -> Tabs
Guitars by Mark Blanchard, Albert&Mueller, Paul Woolson, Collings, Composite Acoustics, and Derek Coombs

"Reality is that which when you stop believing in it, doesn't go away."

Woods hands pick by eye and ear
Made to one with pride and love
To be that we hold so dear
A voice from heavens above
  #38  
Old 03-28-2010, 12:09 PM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

The new version is kind of...unintuitive. Audio Delay was easy because you had what looks like this:

L - 0, 0, 0, 0 : 0, 0 - makeup gain
R - 0, 0, 0, 0 : 0, 0 - makeup gain

where the 0s correspond to 1000, 100, 10, 1 : 1/10, 1/100 milliseconds

The new version seems to have merged the Audio Delay plugin with the Latency Compensation plugin, eliminated one of the rows above and skinned the internal routing in a really quite strange manner. The left and right input signals are called A & B for reasons I can't fathom, and I could only get a stereo delay from the plugin by choosing the 'stereo delay' preset. But even this did not tell me whether the right channel was delayed behind the left channel or vice versa, and it wasn't immediately obvious how to swap channels.

I think Aleksey is a very talented programmer and his freeware plugins are all awesome, but his new plugins have bumped up the complexity quite a lot. I'll have to go and read the manual now. :|
  #39  
Old 03-28-2010, 02:33 PM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

I'll download the older version tomorrow - thanks for the advice. I thought it would be harder to install those kinds of things (never thought something like "Magix" would work with plug-ins you install yourself), but it's fun trying them out. And easier is better, for me. And if I understand you right (what you say about the older version), what I did with the new version isn't quite the point, anyway.

I did take more of a look at SPAN - I even noticed that the lines change if you play with the eq on some tracks. It's a start, at any rate!
  #40  
Old 03-28-2010, 04:54 PM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

Yes. Here's a good tip I use for finding 'problem' spots with an EQ. Turn the volume down a little bit before you do this. Bring up an instance of a paragraphic/parametric EQ and turn one of the nodes into a 'bell' filter. Now set the amplitude of the filter really high and set Q (resonance) really narrow so you have a tall, thin bell-shaped curve. Sweep this bell curve slowly all the way across the audio frequency range (20hz to 20khz) and listen carefully. As you run the filter over the fundamental and harmonics you will notice massive spikes in amplitude. As you do this, take a look at SPAN and watch as the fundamental and harmonic peaks get taller.

Once you find a problem range (a very shrill sound), simply turn the bell curve upside down (notch) and this will cut out some of the problem frequencies. These problem frequencies will most likely be harmonics. You can use this trick to kill individual harmonics if they are too shrill/dominant. Be careful about doing this excessively though, as the harmonic structure of instruments like guitars and pianos is important. You can make them sound very unnatural if you notch out too many harmonics or use very high amplitude/narrow Q notches. Generally speaking, wide shallow EQ retains more of the natural sound of instruments by keeping the harmonic structure relatively in proportion.
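For the curious, the 'bell' in a parametric EQ is usually a single biquad filter. Here is a minimal Python sketch using the well-known Audio EQ Cookbook peaking-filter formulas; the centre frequency, gain and Q values are arbitrary examples:

```python
import math
import cmath

def peaking_biquad(f0, gain_db, q, sr=44100):
    """Biquad coefficients for a 'bell' filter centred on f0 (Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A)
    a = (1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A)
    return b, a

def gain_db_at(f, coeffs, sr=44100):
    """Magnitude response of the biquad in dB at frequency f."""
    (b0, b1, b2), (a0, a1, a2) = coeffs
    z = cmath.exp(-1j * 2 * math.pi * f / sr)
    h = (b0 + b1 * z + b2 * z * z) / (a0 + a1 * z + a2 * z * z)
    return 20 * math.log10(abs(h))

boost = peaking_biquad(3000, +12, q=8)  # tall, thin bell for sweeping
notch = peaking_biquad(3000, -12, q=8)  # the same bell flipped upside down
# gain_db_at(3000, boost) sits at +12 dB right at the centre, near 0 dB far away
```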

Also note that if you notch out the fundamental frequency of an instrument, you can shift its pitch reference to the first harmonic if that harmonic ends up sounding much louder than the fundamental.

GlissEQ is ideal for this because it has SPAN built into it, plus the ability to overlay up to 4 channels' worth of spectrum analysis. What I usually do is pick 2 instruments that I feel are clashing and look at both of their spectrums bouncing up and down on the same graph in different colours. Then I do the bell/notch trick by sweeping the filter over the area where I can see a lot of harmonics clashing together, and notch out that area on one of the instruments. That lets the other instrument peep through in that range.

There are lots of tricks you can do with this and a compressor's sidechain input. Let's go back to the guitar (channel 4). Let's clone channel 4 by sending it to channel 5 and then disconnecting channel 5 from the master bus, so that channel 5 makes no sound but you can see its meters bouncing up and down. Put GlissEQ on channel 5. Now, here's where it gets interesting. Channel 5 goes into the input of GlissEQ. The output of GlissEQ goes into the sidechain input of a compressor on channel 4. In GlissEQ I dial in a high pass filter so it attenuates everything that isn't high frequency sibilance, shhhh sounds and hisssss sounds.

What this means is that the action of the compressor is now dependent on the frequency range of the EQ. This is called frequency dependent compression, since there's only treble triggering the peak detector of the compressor. If you set the cutoff of the high pass filter around 6khz to 8khz, you have what is called a de-esser. Basically, the compressor now performs gain reduction only above 6khz to 8khz because it gets no other information at its input.

There are de-esser plugins, but they are all actually just a high pass filter going into the auxiliary input of a compressor. Sometimes they don't look like a compressor at all, but they are. It's just hidden underneath the graphical user interface.

I recommend trying this on your vocals if you find that there are plosives or excessive hissing and shhhing when you sing consonant sounds. You can do this with any multimode filter or paragraphic EQ and a compressor with a sidechain input.
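As a toy model of that routing, here is a de-esser sketched in plain Python: a crude one-pole high-pass feeds the detector, and the resulting gain reduction is applied to the full signal. All the numbers (cutoff, threshold, ratio) are arbitrary, and a real de-esser would add proper attack/release smoothing:

```python
import math

def one_pole_highpass(x, cutoff, sr=44100):
    """Crude 6 dB/oct high-pass: the input minus a one-pole low-pass."""
    a = math.exp(-2 * math.pi * cutoff / sr)
    lp, out = 0.0, []
    for s in x:
        lp = (1 - a) * s + a * lp
        out.append(s - lp)
    return out

def de_ess(x, cutoff=7000, threshold=0.1, ratio=4.0, sr=44100):
    side = one_pole_highpass(x, cutoff, sr)   # sidechain hears treble only
    out = []
    for s, d in zip(x, side):
        level = abs(d)
        if level > threshold:                 # detector triggers on sibilance
            target = threshold + (level - threshold) / ratio
            gain = target / level
        else:
            gain = 1.0
        out.append(s * gain)                  # reduction applied to full band
    return out

sr = 44100
hiss = [0.5 * math.sin(2 * math.pi * 9000 * n / sr) for n in range(500)]
bass = [0.5 * math.sin(2 * math.pi * 200 * n / sr) for n in range(500)]
# de_ess(hiss) comes out quieter; de_ess(bass) passes through untouched
```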

I highly encourage you to experiment with simple signal processors: simple compressors, EQs, filters, Audio Delay and Low Frequency Oscillators (LFOs). You will find that with some clever routing you can create more complex effects that have their own names, like de-essers. I also recommend trying to visualize the signal chain - that is, the path the signal takes into and out of the mixer. This was much easier to do back in the old days, before software mixing/production existed, because you could simply follow the cables. When you do this in software you don't have any cables, so it can be tricky to visualize where the signal is going, but the principles are the same.

For instance, you may wonder at all the different types of effects like digital delay, chorus, flanger, phaser etc., but the truth is that these effects are all based on the same thing. They work exactly like Audio Delay, by introducing a time shift to the signal. The main difference is the amount of time shift: in the case of digital delay, the shift is so long that it becomes an echo. Chorus has a much shorter time shift, so it's not perceived as an echo. Flangers and phasers have even shorter time shifts, and their characteristic swooshing sound arises because the time shift itself is modulated (made to vary over time) by an LFO and this signal is then mixed back into the unshifted signal (wet/dry ratio). They are summed before the output of the flanger/phaser, which causes cyclical destructive phasing - and that is what's responsible for the swooshing sound.
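That "modulated time shift" is only a few lines of code. Here is a toy flanger in plain Python - an LFO wobbling a short delay line, mixed back with the dry signal. The parameter values are illustrative, and a real flanger would interpolate between samples rather than rounding:

```python
import math

def flanger(x, sr=44100, base_ms=1.0, depth_ms=0.8, lfo_hz=0.5, wet=0.5):
    out = []
    for n, s in enumerate(x):
        # LFO sweeps the delay time between base_ms +/- depth_ms
        d_ms = base_ms + depth_ms * math.sin(2 * math.pi * lfo_hz * n / sr)
        i = round(n - d_ms * sr / 1000.0)
        delayed = x[i] if i >= 0 else 0.0
        out.append((1 - wet) * s + wet * delayed)  # dry/wet sum at the output
    return out
```

Whenever the delayed copy lands in antiphase with the dry signal they cancel (comb filtering), and the LFO sweeping those cancellation notches up and down the spectrum is the swoosh.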

It's all beautifully simple and logical, but there's a lot to learn, which is where it can get overwhelming. Just take it one step at a time and focus on simple routing tricks before building up to the complicated ones. Then, once you understand the principles behind things like phasers, you can use phaser plugins to save yourself the time of connecting all these different signal processors. It's important to understand how they work, however, since then you know what you need to do.

Last edited by Pokiehat; 03-28-2010 at 05:21 PM.
  #41  
Old 03-30-2010, 09:47 AM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

Thanks again for the good information, it's a lot but I can kind of start to follow you... in theory, anyway.

I will take a look at Gliss - just downloaded it for free, but suspect it's a demo, then - and try your trick with the curve and eq.

I did finally start to figure out how to use aux sends/bus things in my program. Still not sure why it's better to use them (or what they are, actually..) but will try to see what differences I can hear with them rather than adding the effects/plug-ins to the tracks themselves.

I'll probably be back with more questions, but have enough to keep me busy for some time, I think..!
  #42  
Old 05-08-2010, 09:23 AM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

I've been reading and trying things out, and got a couple of books about mixing (which are very interesting).

One thing I wonder about now, though, is kind of like my last reply. I did manage to get an aux track working, to send one (or more) tracks through it.

I was wondering (and can't seem to find an answer) - why do you do it like that? Do you always use an aux for each track? Do you leave the sent track alone (no eq, no effects) and have it all on the aux? Just seems like I have to at least eq everything, or get too much bass.

And the aux level (volume) - is that just a matter of taste, or are there general guides? My program starts at a default of -40, which is not very loud.

I'm learning more about the effects - like stereo delay (which I love) and how to better use a compressor - just not sure if they're best used on the track or the aux.

More or less: what is the point of an aux track? Couldn't you just double the original, keep one "dry" and use the effects on the other? Is an aux just handy so you can use the same effects on several tracks, or compare effects by using a few different aux tracks? I keep reading that you should use them, but I'm never sure why or how often.

Not that it matters that much... my singer quit on me... but I'd still like to record things with my wife (and it's a really fun and interesting thing to read and learn about, I'm finding).
  #43  
Old 05-08-2010, 10:09 AM
Pokiehat Pokiehat is offline
Registered User
 
Join Date: Jun 2009
Posts: 181
Default

Auxiliary channels have lots of different uses. I can show you some of the things I use them for but you can be creative with using them.

1) FL Studio doesn't have fader groups. Grouping faders lets you select, for example, 3 channels with different levels; if you then move the fader of one channel in the group, the levels of the other 2 channels move as well, so their relative positions stay the same.

Now, the DAW I use most doesn't have groups, so you have to achieve the same thing a different way. I route all the channels I want to group through an auxiliary channel and just change the volume fader position on the auxiliary channel. This does the same thing, which is to change the volume of a sound, or of multiple sounds spread across multiple channels.

2) Send buses. Very useful when you use VST plugin effects that are very cpu/memory intensive. I use SIR Reverb, which is free and sounds great, but it has 2 problems: (a) it is very cpu intensive, so you can't have very many instances of it running, and (b) it adds 8960 samples of latency. This means that whatever you use it on will start about 200 milliseconds later than it should at 44.1khz sampling rate.

I'll address (a) and (b) separately, starting with (a). To reduce the number of instances of SIR running on my computer, I set up an auxiliary (send) channel so that multiple channels can be routed through it. So I have one reverb set to 100% wet, and I can have many channels with more or less of that reverb signal mixed in by sending varying amounts of them through the send bus.
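Here is the send-bus idea reduced to a few lines of Python. The "reverb" is a fake stand-in (one attenuated echo) just to show the routing: each channel's dry signal goes straight to the mix, and a weighted copy feeds one shared, 100% wet effect:

```python
def fake_reverb(x, delay=4, decay=0.5):
    """Stand-in for a real reverb such as SIR: one quiet echo, 100% wet."""
    return [decay * x[i - delay] if i >= delay else 0.0 for i in range(len(x))]

def mix_with_send_bus(channels, send_levels, effect):
    n = max(len(c) for c in channels)
    chans = [c + [0.0] * (n - len(c)) for c in channels]
    # the bus input is the weighted sum of each channel's send amount
    bus_in = [sum(lvl * c[i] for lvl, c in zip(send_levels, chans))
              for i in range(n)]
    wet = effect(bus_in)                          # ONE effect instance total
    dry = [sum(c[i] for c in chans) for i in range(n)]
    return [d + w for d, w in zip(dry, wet)]

guitar = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # impulse on beat 1
voice  = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # impulse on beat 2
mix = mix_with_send_bus([guitar, voice], [0.8, 0.2], fake_reverb)
# dry hits stay at full level; the guitar's echo is 4x louder than the voice's
```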

3) Addressing point (b), I create auxiliary channels for something called manual Plugin Delay Compensation (PDC). SIR is an example of a plugin that adds latency, so if you use it as an insert effect on a channel, that channel will be delayed by 8960 samples, which means it doesn't trigger at the exact same time as the other channels.

To get all the channels triggering at the same time, I route the channel with the delay on it to the master bus directly. All the other channels, without a delay, I route to an auxiliary channel which I refer to as a 'submix'. This submix then gets routed to the master bus. The auxiliary channel has a sample delay on it so that you can manually compensate for a plugin which introduces latency. You need to create multiple submixes to compensate for multiple plugins with different delays.
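The compensation itself is just arithmetic: pad every latency-free channel by the plugin's reported latency so everything lines up again. A sketch (8960 samples is the figure quoted above for SIR):

```python
SR = 44100
PLUGIN_LATENCY = 8960   # samples of latency reported by the plugin

def latency_ms(samples, sr=SR):
    """How late a channel runs, in milliseconds."""
    return 1000.0 * samples / sr

def compensate(channel, samples=PLUGIN_LATENCY):
    """Pad a latency-free channel so it triggers with the late one."""
    return [0.0] * samples + list(channel)

# 8960 samples at 44.1 khz works out to roughly 203 ms
```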

4) Sidechaining. That is, using the level of one channel to determine the magnitude of the input of another channel via an auxiliary send. There are so many creative uses for this that it's hard to list them all, but kick/bass auto-ducking is one. Another is de-essing (the output of a high pass filter into the auxiliary input of a compressor, to get compression that only affects high frequency sound).
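Kick/bass auto-ducking can be sketched the same way as the de-esser: the sidechain signal drives an envelope follower, and the follower turns the other channel down. A toy version in plain Python, with made-up numbers:

```python
def duck(bass, kick, amount=0.8, release=0.999):
    """Turn bass down while the kick sounds; let it back up in between."""
    env, out = 0.0, []
    for b, k in zip(bass, kick):
        env = max(abs(k), env * release)   # fast attack, slow release
        out.append(b * (1.0 - amount * min(env, 1.0)))
    return out

bass = [1.0] * 6
kick = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
ducked = duck(bass, kick)
# bass drops to ~0.2 while the kick hits, then creeps back up afterwards
```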

I'm finding some of this hard to explain in simple terms. I know it can be done, because it's very logical, and I feel like I could show anyone how it works and they would understand it. It's not complicated, but I'm not so good at explaining it.

I will say that there are several sounds I create that consist of multiple sounds spread across multiple channels, but they sound like one single sound. Like your guitar, for instance, which is actually 2 channels mixed in a way that sounds like 1 guitar. Some of my work with synths goes in this direction, but much more so. For instance, I'll have 3 different synths triggering, spread across 3 channels, but mixed in such a way that it sounds like 1 very big, very detailed synth. And of these 3 channels, some may need to be delay compensated, and others may need to be routed to a send bus in varying amounts with reverb, so you excite different aspects of the combined sound.

I think it helps to get away from the idea that when you hear a guitar on a record, it's just one mic, one channel, one output and that's it. Many recorded rhythm guitar parts are double, triple, even quadruple tracked. Some are recorded from multiple sources (magnetic pickup, soundboard transducer and microphone) and mixed together into one sound.

Hope this helps.

Last edited by Pokiehat; 05-08-2010 at 10:34 AM.
  #44  
Old 05-08-2010, 11:05 AM
KevWind's Avatar
KevWind KevWind is offline
Charter Member
 
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,947
Default

Quote:
Originally Posted by Pokiehat View Post
Yes. Heres a good tip I use for finding 'problem' spots with an EQ. Turn the volume down a little bit before you do this. Bring up an instance of a paragraphic/parametric EQ and turn one of the nodes into a 'bell' filter. Now set the amplitude of the filter really high and set Q (resonance) really narrow so you have a tall, thin bell shaped curve. Sweep this bell curve slowly all the way across the audio frequency range (20hz to 20khz) and listen carefully.
This is a very good tip IMO, and is what I do routinely on both my guitar and vocals, for subtractive eq only. I usually use a 4-band EQ with narrow Q for vocals and a 7-band one for acoustic guitar. More often than not, the problem frequencies are around 500 hz to 1 khz, and 2 khz. I also use the HPF on these EQs, set to start rolling off at about 145 hz; IMO frequencies below this just seem to muddy the water.
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouTube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System :
Studio system Avid Carbon interface , PT Ultimate 2023.12 -Mid 2020 iMac 27" 3.8GHz 8-core i7 10th Gen ,, Ventura 13.2.1

Mobile MBP M1 Pro , PT Ultimate 2023.12 Sonoma 14.4
  #45  
Old 05-08-2010, 12:01 PM
shawlie shawlie is offline
Registered User
 
Join Date: Feb 2008
Posts: 2,727
Default

Thanks for the information, as always!

I think I follow what you are saying, mostly, and it kind of is what I thought, more or less (I hope).

Like in example (1), you are using the aux because you can't group them in another way - so you send the three tracks to the same effect, and each one is affected, instead of doing it per track?

(2) I understand, but since I use pretty cheap stuff I have no cpu problems. But you have one "big/fat" reverb running on the aux and send what you want through it (so there's really only one reverb going). Then you adjust the reverb amount by using the send levels to the aux for each channel (like one track will have it at -10, another at -20, etc.)?

(3) I don't think I have latency problems with the stuff I'm trying to do, but you use that aux to fix the latency you get by using the plug-in. The rest of the explanation is getting a little tricky to follow... but I'll look at it more.

(4) This I understand in theory, but have no idea how to actually get it to work. It is one thing (reading your other mixing explanations) that seems like an extremely useful thing - like how you did it with the guitar/piano in that short example you did for me. For things like reverb, too, it would seem a great thing to be able to use.

Can you do this on most software? I can get aux channels and sub-mix channels (for grouping things I suspect). How do you set up a side chain using an aux channel?

So an aux can be very useful (for the cpu problems and for grouping things), but if you don't have those problems, it isn't absolutely necessary? I can see now, though, that it would seem to save time and make things easier to work with - one aux channel for reverb or delay, for example, instead of messing with each individual channel each time, and just setting the send levels for the desired amount.

The side-chain I will look into more.

Your synth stuff sounds interesting - are you using more or less one type of sound for each of the three channels? You could maybe post a piece here, I'd like to hear it. I've been trying out your idea of recording a few tracks... but admit my timing is still not that good the whole way through, and there's too much echo here and there. But when it does come a bit more together, I can see it making a pretty big sound. It's something I keep in mind.

Thanks a lot for the help - gives me a some ideas how to use the aux, and really cleared up a few things I was wondering about.