The Acoustic Guitar Forum

#16
09-04-2023, 11:19 AM
KevWind
Charter Member
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,315

Quote:
Originally Posted by TBman View Post
In Reaper, when you want to save/export the "mastered" sound file to wav, mp3, etc., it is "Rendered" to the device's storage media. That's the menu item name for the process.
I know, which is why I said most DAWs. And unfortunately it highlights one big drawback: different DAWs sometimes use different terms for the same function.

I decided not to go there and edited that post
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouTube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System:
Avid Carbon interface, PT Ultimate 2023.6, Mid 2020 iMac 27" 3.8GHz 8-core i7 (10th Gen), 128GB 2666MHz DDR4 RAM, 2TB SSD storage, Radeon Pro 5700 XT 16GB, Ventura 13.2.1

Last edited by KevWind; 09-04-2023 at 03:59 PM.

#17
09-04-2023, 11:48 AM
Doug Young
Charter Member
Join Date: Apr 2005
Location: Mountain View, CA
Posts: 9,712

Quote:
Originally Posted by kurth View Post
Doug... here are some arbitrary threads found by googling that discuss 'summing algorithms', so there does seem to be a grey area of some interest.
https://gearspace.com/board/music-co...ing-angle.html ...and another https://www.kvraudio.com/forum/viewtopic.php?t=414209 .... And I agree about players and environment... even the headphones someone uses change the sound. But I hear a slight loss of dynamics, not just volume, and it's not my DAW per se. I heard it in GarageBand, in Logic Pro 9, and in Ableton. That's the reason I started using Ozone 9. It seems to be able to 'sum' the parts in a way that preserves more of the dynamic response. Thanks
Yes, summing is one of those black art areas where lots of magic happens :-)

BUT, I don't see how that applies. You're playing a multi-track project back, and it's going through whatever summing the DAW supports. Now you export/render/save it, whatever you call it, and it goes through that same summing process (hopefully). If the results aren't the same, I'd argue there's a bug or a mis-design in the DAW, or something off in your workflow such that you're not saving exactly what you're listening to. How can you do a mix if what you hear isn't what gets produced in the end?

For what it's worth, my mixes in Logic sound exactly the same both before and after a bounce *IF* I listen at the same levels. The comment that there's a difference in dynamic range suggests you may be listening at different, even very subtly different, levels.

Aside from the A/B test I suggested, another step in exploring/diagnosing this would be to put a test tone at the beginning of your track. Then check the levels with a sound meter and set both the DAW mix and the rendered file to the same level during that test tone, say 80 dB. See if they still sound different. If they're reliably different, something's wrong. It could be a bug in the DAW, something wrong in your export process, something in your setup, etc. As I say, there's a lot of software involved in these paths, and I've never seen completely bug-free software, so anything's possible, but I suspect something else is going on. If any of the well-known DAWs were failing to render a mix accurately in a predictable way, the bug would have been fixed, or people would have stopped using that DAW long ago.
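
(If you want to generate such a reference tone outside the DAW, here is a minimal Python sketch, just an illustration: it assumes the numpy and soundfile packages are installed, and the file name, level, and sample rate are arbitrary examples to match to your own session.)

Code:
import numpy as np
import soundfile as sf

SAMPLE_RATE = 48000   # match your session's sample rate
DURATION_S = 5.0      # five seconds of tone
FREQ_HZ = 1000.0      # standard 1 kHz reference tone
LEVEL_DBFS = -20.0    # a common reference level

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
amplitude = 10.0 ** (LEVEL_DBFS / 20.0)   # convert dBFS to linear gain
tone = amplitude * np.sin(2.0 * np.pi * FREQ_HZ * t)

# Write a 24-bit stereo WAV (same tone on both channels)
sf.write("reference_tone_1kHz_-20dBFS.wav",
         np.column_stack([tone, tone]), SAMPLE_RATE, subtype="PCM_24")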

This could get way too long, but I have a really funny story about someone who claimed that they could hear a difference in 2 identical, bit-for-bit the same, audio files. Perception can fool all of us easily.

#18
09-04-2023, 04:38 PM
runamuck
Registered User
Join Date: Jul 2006
Posts: 2,203

Quote:
Originally Posted by kurth View Post
... I hear the difference in pre-rendered and rendered...
Do you by any chance have a room correction plugin on your stereo out buss? Something like Sonarworks?

#19
09-04-2023, 05:01 PM
KevWind
Charter Member
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,315

Quote:
Originally Posted by kurth View Post
But wouldn't the pre-rendered DAW file be playing each of the wave files for each stem separately, while the rendered file would be those stems coalesced into another format like WAV or mp3, using each DAW's particular mixdown algorithm? For example, your computer will work a lot harder playing a 10-track WAV DAW project than it will playing a mixed-down WAV file of the same song. It loses something to my ears. And at times some things come to the forefront that were not as noticeable. I have to actually listen to a mixdown to know if it's right. Whatever that mixdown algorithm does, to me it sounds like it does something. Just my 2 cents.
I had started to post what I thought should be happening, but there is a lot to unpack in your statement, so I decided to run some tests first and report back.

Also, to clarify and make sure we are on the same page:

#1 I am assuming that when you say "render" you are talking about exporting or bouncing out of the DAW (as per what TBman said), and that when listening back, something sounds different to you. Is that correct?

So first, in order to eliminate as many variables as possible (file format changes, different playback devices):

Let's assume for the sake of discussion that the fewest variables come from rendering/exporting/bouncing to the same file type the session was recorded in, so let's use the WAV file type.

And again, for the fewest variables, let's say we import the rendered/exported/bounced WAV file back into the session. That eliminates any variable of file type change and any variable of playback device difference.

So that is what I did. In Pro Tools I record in WAV format.

I opened a finished session and muted all the tracks except for the first 25 seconds of the rhythm acoustic guitar part (which was recorded on two mono tracks panned hard left and right).

I then did several different export options for that section:

First I bounced it as an interleaved file,
then as summed mono,
and also as multi-mono.

Then I imported them all back into the session as additional audio tracks.

I also printed that guitar section within the DAW to a stereo audio track.

In all cases, when I reversed the polarity they nulled (all the audio went silent), both against the playback happening within the DAW (i.e. the un-rendered/bounced playback) and against the printed track that had stayed in the DAW.

Now, interestingly, on the multi-mono import tracks I had to lower the level by almost 3 dB to get them to null; before that adjustment I could hear just a faint amount of signal.
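
(For anyone who wants to run the same kind of null test outside a DAW, here is a rough Python sketch of the idea. The file names are hypothetical, and it assumes the numpy and soundfile packages and that both files share the same sample rate, length, and channel layout.)

Code:
import numpy as np
import soundfile as sf

# The track printed inside the DAW and the bounced/re-imported version
original, sr1 = sf.read("guitar_printed_in_daw.wav")
bounced, sr2 = sf.read("guitar_bounced_reimported.wav")
assert sr1 == sr2, "Sample rates differ - not an apples-to-apples comparison"

n = min(len(original), len(bounced))     # trim to the shorter file
residual = original[:n] - bounced[:n]    # polarity flip + sum = subtraction

peak = np.max(np.abs(residual))
if peak == 0:
    print("Perfect null - the files are sample-identical")
else:
    print(f"Peak residual: {20 * np.log10(peak):.1f} dBFS")
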
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouTube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System:
Avid Carbon interface, PT Ultimate 2023.6, Mid 2020 iMac 27" 3.8GHz 8-core i7 (10th Gen), 128GB 2666MHz DDR4 RAM, 2TB SSD storage, Radeon Pro 5700 XT 16GB, Ventura 13.2.1

#20
09-04-2023, 07:39 PM
kurth
Registered User
Join Date: Apr 2021
Posts: 645

Quote:
Originally Posted by KevWind View Post
I had started to post what I thought should be happening, but there is a lot to unpack in your statement, so I decided to run some tests first and report back. [...] In all cases, when I reversed the polarity they nulled (all the audio went silent). [...] Now, interestingly, on the multi-mono import tracks I had to lower the level by almost 3 dB to get them to null.
Kev... when you mix down a track, the program takes time to make the file. That's rendering. It's creating a sum total of the individual parts. I mess with how that process sums, especially in the master track. Some actions take more time to render. Also, I think I see the test you're proposing: using a video analogy, you rendered back a negative mirror image, overlaid them, and they were empty, i.e. null. They canceled each other out... except for one type. Interesting, although wouldn't all the file types not only have to be the same, but the sampling depth as well?

All I know is that I can master down to pretty close, but to do the critical listen I've got to use the output file, mp3 or WAV or FLAC or whatever. That means I usually end up outputting more than one time. And since that file would be the master streaming file, it works for arriving at a finished product. It takes only a small amount of time. I have no idea what's causing it, but I hear the difference more over my speakers than headphones. And Doug is absolutely right that other factors overshadow it. If someone listens using AirPods... different game. If I share a file with someone, I usually say to listen with VLC and headphones, just to put us in the same ballpark. Thanks
__________________
Goya g10, Yamaha CN525E, 10string classical, Babilon Lombard N, Ibanez GA5TCE
Alvarez a700 F mandolin, Epiphone Mandobird
Ovation 12 string 1515
Takamine F349, Takamine g340, Yamaha LL6M
'78 Fender Strat
Univox Ultra elec12string
Lute 13 strings
Gibson Les Paul Triumph Bass
Piano, Keyboards, Controllers, Marimba, Dusty Strings harp

#21
09-04-2023, 10:08 PM
Doug Young
Charter Member
Join Date: Apr 2005
Location: Mountain View, CA
Posts: 9,712

Quote:
Originally Posted by kurth View Post
All I know is that I can master down to pretty close, but to do the critical listen I've got to use the output file, mp3 or WAV or FLAC or whatever.
So, which format are you listening to when you say it sounds different? MP3, we know, will be degraded. The others, maybe, depending on your settings. To sound the same, you need to export at full quality, the same as what's playing in the DAW. You also mention streaming services, and that's a whole different ballgame. Anything I release on Spotify, for example, is noticeably worse - they stream at a low bit rate, and likely also compress. Even with YouTube, which is quite good these days compared to what it used to be, the quality you hear is not what I uploaded, because again, they lower the bit rate for streaming and probably compress.

When we say the rendered result should sound the same as the playback in the DAW, you have to be listening to a file with the same bit depth you're working with in the DAW, no compressed file formats, etc. Output at least 24-bit, uncompressed WAV or AIFF. Even with FLAC, which in theory preserves a full-quality audio file, you have to play it back in a FLAC player, right? So now you're listening to a different playback engine, which may introduce its own changes.
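
(A quick way to confirm what you actually exported before doing any listening comparison. This sketch uses only Python's standard-library wave module, and the file name is just an example.)

Code:
import wave

with wave.open("my_export.wav", "rb") as w:
    print("Channels:   ", w.getnchannels())
    print("Sample rate:", w.getframerate(), "Hz")
    print("Bit depth:  ", w.getsampwidth() * 8, "bits")
    print("Length:     ", round(w.getnframes() / w.getframerate(), 2), "seconds")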

#22
09-05-2023, 07:49 AM
KevWind
Charter Member
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,315

Quote:
Originally Posted by kurth View Post
Kev... when you mix down a track, the program takes time to make the file. That's rendering. It's creating a sum total of the individual parts. [...] Interesting, although wouldn't all the file types not only have to be the same, but the sampling depth as well?
I think there is some ambiguity in the specific terminology being used.

I don't know what your interpretation of "mix down a track" is.

For the sake of communication, if only in this discussion, it would help to specify the terms for clarity.

So for the sake of discussion let's just say we are talking about a multi-instrument project/session with multiple guitars, vocals, etc.

#1 A "track" is a single specific recorded element (either mono or stereo) for a specific instrument or voice, and is usually represented by a channel strip in the mixer window. (Yes, I realize that "track" is often the slang term for the entire project/session, but let's not use it that way here.)

#2 A "mix" is multiple tracks.

#3 "Mixing" is the process of setting levels and panning, and often adding plugin effects (FX).

#4 Mixing in most DAWs happens in real time and is nondestructive (it does not change the original recorded files). Since the original recorded files are already made during recording, "mixing" them does not involve the program taking time to make a file, and summing them for two-channel playback while still within the DAW also does not involve a new file being made.

#5 When you then export the file out of the DAW, that is when a new file is made; yes, it also involves summing, and yes, that is when the "rendering" takes place.


Quote:
Interesting, although wouldn't all the file types not only have to be the same, but the sampling depth as well?
Yes, exactly. In order to actually know whether "rendering" (or exporting, or bouncing) changes or lessens the sound, you absolutely must stay in the same file type, bit depth, and sample rate that it was recorded with in the DAW. Otherwise you are not comparing apples to apples; you are comparing apples to oranges, which will not tell you IF the rendering process changes the sound, because by rendering to a different file type you have added an invalid variable to the comparison.

The discussion as to whether rendering/exporting to a different file type - say from a WAV recorded in the DAW to mp3, FLAC, or AAC - detracts from the sound of the initial recorded WAV file is a different discussion from whether rendering itself detracts from the sound (which is what Doug and I were talking about in this thread).

In other words, it is not the "summing" or "rendering" that is affecting (or lessening) the sound; it is changing to a different file type that causes the issue.
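
(If you want to put a number on what a bit-depth change alone does to a signal, separately from the rendering step, here is a small illustrative Python sketch using numpy; the test signal and figures are arbitrary, not a claim about any particular DAW.)

Code:
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5 * rng.standard_normal(48000)        # one second of noise-like audio

quantized_16 = np.round(signal * 32767) / 32767  # simulate rounding to 16-bit steps

rms = lambda x: np.sqrt(np.mean(x ** 2))
error_db = 20 * np.log10(rms(signal - quantized_16) / rms(signal))
print(f"16-bit quantization error sits about {abs(error_db):.0f} dB below the signal")
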
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouTube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System:
Avid Carbon interface, PT Ultimate 2023.6, Mid 2020 iMac 27" 3.8GHz 8-core i7 (10th Gen), 128GB 2666MHz DDR4 RAM, 2TB SSD storage, Radeon Pro 5700 XT 16GB, Ventura 13.2.1

Last edited by KevWind; 09-05-2023 at 08:18 AM.

#23
09-05-2023, 10:04 AM
kurth
Registered User
Join Date: Apr 2021
Posts: 645

Quote:
Originally Posted by Doug Young View Post
So, which format are you listening to when you say it sounds different? MP3, we know, will be degraded. [...] Output at least 24-bit, uncompressed WAV or AIFF.
Perhaps that's it, because I'm mixing down to a 16-bit WAV file for the streaming master. Next song I'll try mixing down to an uncompressed WAV file and see if it sounds different. Thanks
__________________
Goya g10, Yamaha CN525E, 10string classical, Babilon Lombard N, Ibanez GA5TCE
Alvarez a700 F mandolin, Epiphone Mandobird
Ovation 12 string 1515
Takamine F349, Takamine g340, Yamaha LL6M
'78 Fender Strat
Univox Ultra elec12string
Lute 13 strings
Gibson Les Paul Triumph Bass
Piano, Keyboards, Controllers, Marimba, Dusty Strings harp

#24
09-05-2023, 10:09 AM
kurth
Registered User
Join Date: Apr 2021
Posts: 645

Kev... 'mixing down' is exporting, in Mac world. And yes, it might be due to different file types. I'll do a test and see. Thanks
__________________
Goya g10, Yamaha CN525E, 10string classical, Babilon Lombard N, Ibanez GA5TCE
Alvarez a700 F mandolin, Epiphone Mandobird
Ovation 12 string 1515
Takamine F349, Takamine g340, Yamaha LL6M
'78 Fender Strat
Univox Ultra elec12string
Lute 13 strings
Gibson Les Paul Triumph Bass
Piano, Keyboards, Controllers, Marimba, Dusty Strings harp

#25
09-05-2023, 10:51 AM
Doug Young
Charter Member
Join Date: Apr 2005
Location: Mountain View, CA
Posts: 9,712

Quote:
Originally Posted by kurth View Post
Perhaps that's it, because I'm mixing down to a 16-bit WAV file for the streaming master. Next song I'll try mixing down to an uncompressed WAV file and see if it sounds different. Thanks
Aha, that explains it. You’re hearing reduced quality because you’re comparing the original to a reduced quality file. Exactly as expected. And 16 bits has less dynamic range than 24.
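
(The usual rule of thumb behind that: each bit is worth about 20*log10(2), roughly 6 dB, of dynamic range. A tiny Python snippet to see the numbers - illustrative only; real converters and dithered files differ a bit.)

Code:
import math

for bits in (16, 24):
    print(f"{bits}-bit: ~{bits * 20 * math.log10(2):.0f} dB of dynamic range")
# prints roughly 96 dB for 16-bit and 144 dB for 24-bit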

#26
09-05-2023, 11:25 AM
KevWind
Charter Member
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,315

Quote:
Originally Posted by kurth View Post
Kev... 'mixing down' is exporting, in Mac world. And yes, it might be due to different file types. I'll do a test and see. Thanks


I guess it was your use of the phrase "mix down a track" that threw me. I now assume you were using "a track" to mean an entire multi-track mix or session/project, which was a bit confusing to me. Ah, the pitfalls of written communication.

Yes, give it a try: export in the exact same format you record in, bring it back into your project, and invert the phase. All else being equal, it should null.
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouTube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System:
Avid Carbon interface, PT Ultimate 2023.6, Mid 2020 iMac 27" 3.8GHz 8-core i7 (10th Gen), 128GB 2666MHz DDR4 RAM, 2TB SSD storage, Radeon Pro 5700 XT 16GB, Ventura 13.2.1

#27
09-05-2023, 11:36 AM
Doug Young
Charter Member
Join Date: Apr 2005
Location: Mountain View, CA
Posts: 9,712

Quote:
Originally Posted by KevWind View Post
Yes, give it a try: export in the exact same format you record in, bring it back into your project, and invert the phase. All else being equal, it should null.

Maybe... The "all else being equal" part is where this gets hard, which is why I didn't suggest a null test to start with. It depends on the signal path in the DAW and everything that is being done. For example, if the master output fader is adjusted in the process of "rendering", then the levels of the exported track won't be the same as the pre-master-fader tracks. If he has any plugins or effects on the master buss, then he'll be trying to null the processed exported track against the pre-effect raw tracks, while also re-applying those master effects to the exported track. There are lots of ways to go wrong here if you don't understand the signal flow, file formats, etc.

It's hard to cover all the issues (as we see - not knowing 24-bit was being compared to 16) in brief text posts with incomplete information. I imagine this could all be easily cleared up in person in a few minutes.
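
(One way to take plain gain offsets - like a master fader that isn't at unity - out of the comparison is to level-match the two files before looking at the residual. A rough Python sketch with hypothetical file names, assuming numpy and soundfile and no master-bus processing:)

Code:
import numpy as np
import soundfile as sf

a, sr = sf.read("in_daw_print.wav")
b, _ = sf.read("exported_and_reimported.wav")
n = min(len(a), len(b))
a, b = a[:n], b[:n]

rms = lambda x: np.sqrt(np.mean(x ** 2))
b_matched = b * (rms(a) / rms(b))        # remove any simple gain offset

residual = rms(a - b_matched)
if residual == 0:
    print("Perfect null after level matching")
else:
    print(f"Residual: {20 * np.log10(residual / rms(a)):.1f} dB relative to the mix")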

#28
09-05-2023, 01:23 PM
KevWind
Charter Member
Join Date: Apr 2008
Location: Edge of Wilderness Wyoming
Posts: 19,315

Quote:
Originally Posted by Doug Young View Post
Maybe... The "all else being equal" part is where this gets hard, which is why I didn't suggest a null test to start with. [...] There are lots of ways to go wrong here if you don't understand the signal flow, file formats, etc.
Good point, and I agree there should be no FX on the master fader at the very least. (Also note that I never adjust the output of the master fader; it always stays at unity gain.) In fact, when I did my test I removed all plugins from all tracks, because I was only interested in finding out whether the process of rendering/exporting/bouncing itself changed the signal or not.
__________________
Enjoy the Journey.... Kev...

KevWind at Soundcloud

KevWind at YouYube
https://www.youtube.com/playlist?lis...EZxkPKyieOTgRD

System :
Avid Carbon interface , PT Ultimate 2023.6 -Mid 2020 iMac 27" 3.8GHz 8-core i7 10th Gen ,,128GB 2666MHz DDR4 RAM,,2TB SSD storage,Radeon Pro 5700 XT16GB Ventura 13.2.1