Music Production Insights

What Sample Rate and Bit Depth Should I Use?

I often get questions about the best sample rate and bit depth to use, so let me break it down in a simple way.

This article was last updated in October 2021.

What Sample Rate Should I Use?

The sample rate and bit depth you should use depend on the application.

For most music applications, 44.1 kHz is the best sample rate to go for. 48 kHz is common when creating music or other audio for video. Higher sample rates can have advantages for professional music and audio production work, but many professionals work at 44.1 kHz. Using higher sample rates can have disadvantages and should only be considered in professional applications.

What Bit Depth Should I Use?

For consumer/end-user applications, a bit depth of 16 bits is perfectly fine. For professional use (recording, mixing, mastering or professional video editing) a bit depth of 24 bits is better. This ensures a better dynamic range (the difference between the quiet and loud parts of the audio) and better precision when editing. A 32-bit floating point bit depth can have some advantages for professional applications, but the files take up 50% more space compared to 24-bit audio.

Sample Rate

44.1 kHz is the current playback standard for most consumer music applications. In video, 48 kHz is common.

The disadvantages of working at higher sample rates

Higher sample rates of 88.2 kHz, 96 kHz, and even 192 kHz are available in music and audio production software. Is there an advantage to working at these higher sample rates? Why not simply use the maximum sample rate your setup allows?

  • When sample rates double, so do the file sizes on your drive (see the quick calculation after this list).
  • It requires more processing power from your computer. The higher the sample rate, the higher the CPU cost.
  • Some plugins and audio tools can’t handle higher sample rates properly and could cause issues.
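
To put the first point in numbers, here is a quick back-of-the-envelope calculation. It is only a sketch in Python (any calculator will do; it assumes plain uncompressed PCM and ignores file headers), but it shows how the sizes scale:

    # Rough uncompressed PCM size: sample rate x bytes per sample x channels x seconds
    def wav_size_mb(sample_rate, bit_depth, channels=2, seconds=60):
        """Approximate size in MB of one minute of uncompressed stereo PCM audio."""
        return sample_rate * (bit_depth / 8) * channels * seconds / 1_000_000

    for rate in (44_100, 48_000, 96_000, 192_000):
        print(f"{rate} Hz, 24-bit stereo, 1 minute: {wav_size_mb(rate, 24):.1f} MB")
    # 44.1 kHz comes out at roughly 15.9 MB and 96 kHz at roughly 34.6 MB: double the rate, double the size.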

I refer you to this article by Monty at Xiph.org, where he explains the topic in great detail. The article approaches the question from the point of view of music downloads, but the theory behind digital audio is no different when talking about music production.

I also suggest you check this page on the Infinite Wave website. It allows you to inspect how well your DAW handles resampling from a higher sample rate (96 kHz) down to 44.1 kHz. You might be surprised by the poor performance of some DAWs!

For these reasons, my conclusion is that for most people and most applications, 44.1 kHz is the best sample rate to go for. It provides good fidelity, is safe to work with and is not too taxing on your system.

The advantages of working at higher sample rates

Going for higher sample rates can offer some advantages, provided you have the right tools and the experience to handle any conversions properly. In particular, you may avoid fold-back aliasing issues (audible artifacts) when eventually converting higher sample rate material back down to 44.1 kHz.
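
To make the fold-back idea concrete, here is a tiny illustration (just a NumPy sketch of the general principle, not a test of any particular converter): a 30 kHz tone is perfectly representable at 96 kHz, but if the rate is halved by naively throwing away samples, with no low-pass filtering, the tone folds back to an audible 18 kHz.

    import numpy as np

    fs_high, fs_low = 96_000, 48_000
    t = np.arange(fs_high) / fs_high            # one second at 96 kHz
    tone = np.sin(2 * np.pi * 30_000 * t)       # a 30 kHz tone: fine at 96 kHz, above Nyquist at 48 kHz

    naive = tone[::2]                           # "convert" by dropping every other sample (no filtering)

    spectrum = np.abs(np.fft.rfft(naive))
    freqs = np.fft.rfftfreq(len(naive), d=1 / fs_low)
    print(freqs[spectrum.argmax()])             # ~18000.0 Hz: the tone has folded into the audible band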

Here is a great article on the topic by Ryan Schwabe.

Most music professionals I know work at sample rates of 44.1 kHz or 48 kHz.

With my own music, I am currently working at 88.2 kHz. For mastering, I usually work at the sample rate of the submitted source files and convert to the destination sample rate (normally 44.1 kHz) at the end of the project. To ensure the best fidelity, I use iZotope RX to handle all conversions.

Different Sample Rates in a Single Project?

What if you are working on a project at a sample rate of 48 kHz and decide to drop in a sample that is at 44.1 kHz, or vice versa?

It depends on the DAW you are using and its settings. Many modern DAWs will be fine with this.

Ableton Live, for example, will automatically resample (convert) any imported audio to the project sample rate. The project sample rate can be set at “Preferences -> Audio -> Sample Rate”. It all happens behind the scenes and you won’t really notice anything happening. The same is also the case for most virtual instrument samplers – they will sort out any required conversions for you.

Some DAWs won’t do this automatically. In those cases, you will get audio playing back at the wrong speed. You should always be mindful of three things:

  • What sample rate you are working in.
  • What sample rates you are importing into the project.
  • How your specific DAW deals with different sample rates.

Generally it’s good practice to avoid resampling when you can as it can potentially degrade quality. But I wouldn’t worry if you sometimes have to do it in a creative setting. You don’t want things like that to get too much in the way of your creative flow.
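
If you ever need to do a conversion yourself outside the DAW, for example when preparing a sample library, any decent resampler will do the job. As a rough sketch only (this assumes Python with SciPy installed; in practice your DAW or a dedicated tool such as iZotope RX handles it), the operation boils down to something like this:

    from math import gcd
    import numpy as np
    from scipy.signal import resample_poly

    def to_44100(audio, source_rate):
        """Resample a 1-D array (or [samples, channels] array) to 44.1 kHz with a polyphase filter."""
        g = gcd(44_100, source_rate)
        return resample_poly(audio, up=44_100 // g, down=source_rate // g, axis=0)

    clip_48k = np.random.randn(48_000)          # stand-in for one second of 48 kHz audio
    clip_44k = to_44100(clip_48k, 48_000)
    print(len(clip_44k))                        # 44100 samples: same duration, new sample rate

For 48 kHz to 44.1 kHz the ratio works out to 147/160, and the filtering involved in that step is exactly where cheap converters show their aliasing.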

Bit Depth

Now, the question about bit depth is simpler to answer.

24-bit audio gives you a theoretical dynamic range of 144 dB, as opposed to 96 dB with 16-bit audio. More dynamic range means three things:

  • Better signal-to-noise ratio.
  • Better precision when mixing.
  • Less worrying about headroom, as you don’t have to run your levels so hot.
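
Where do the 96 dB and 144 dB figures come from? Each extra bit doubles the number of amplitude steps, which adds roughly 6 dB of range; the arithmetic is just 20 * log10(2^n). A quick sketch (plain Python, nothing DAW-specific):

    import math

    def dynamic_range_db(bits):
        """Theoretical dynamic range of n-bit linear PCM: 20 * log10(2**n), about 6.02 dB per bit."""
        return 20 * math.log10(2 ** bits)

    for bits in (16, 24):
        print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
    # 16-bit: 96.3 dB, 24-bit: 144.5 dB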

32-bit floating point is even better on paper, but in most applications the benefits over 24-bit audio are negligible.

For consumer/end-user applications, a bit depth of 16 bits is fine. For anything more professional, 24-bit audio should be used. It’s also good to note that all professional DAWs use an internal bit depth of 32 or 64 bits these days.
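
One practical consequence of that internal floating-point processing, shown here as a small NumPy sketch (my own simplified illustration, not how any specific DAW is implemented): a peak that momentarily goes over full scale survives in 32-bit float and can simply be turned down later, whereas once the audio is stored as fixed-point integers the overshoot is clipped for good.

    import numpy as np

    over = np.array([2.0, -2.0, 0.5], dtype=np.float32)    # a peak 6 dB "over" full scale

    print(over * 0.5)                           # [ 1.  -1.   0.25]: in float, pulling the fader down recovers it

    as_int16 = (np.clip(over, -1.0, 1.0) * 32767).astype(np.int16)
    print(as_int16 / 32767)                     # roughly [ 1. -1.  0.5]: the overshoot is gone for good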

What Are Your Thoughts?

Let me know where you stand on this. Drop a comment with your thoughts.


Comments

97 responses to “What Sample Rate and Bit Depth Should I Use?”

  1. Bit

    “24 bit audio gives you a good dynamic range of 144 dB”

    Does this really matter in today's (overcompressed) EDM?

    1. Ilpo Karkkainen

      Good question! I would say yes – of course the dynamic range of the final mastered piece of music is nowhere near that. But it is important to have that dynamic range while working on the music, to have resolution in the mix and to be able to push things loud without bringing up the noise floor too much. This especially matters to those who record anything live – vocals etc.

    2. Shawn

      Where can i get an audio interface with 64 bit depth? I wanna melt some faces off wit some guitar solo’s?

      1. Ilpo Kärkkäinen

        I’m afraid for now you just have to work on your technique ?

  2. Denis Druzhinin

    IMO the “higher is always better” principle is only true for live recordings 😉

    I usually work inside the box in 44100 and only switch to 96000 when I need to avoid aliasing when working with certain plugins or when very high frequency modulation is used, like super fast LFOs, FM, ring modulation. And for extreme pitch shifting as well.

    There’s a couple of things about higher sample rates to keep in mind. First is online and offline rendering. Some plugins can apply advanced techniques during offline mixdown, like higher quality and more CPU intense algorithms, multi-pass or oversampling. For example filters in U-HE Zebra are tuned “by hand” for every major sample rate and the same patch recorded at 44100 and 96000 may sound quite different. Second is while working at 44100 we can always choose higher sample rate for offline project mixdown, without even changing audio driver settings. This can be dangerous though if used carelessly.

    1. Ilpo Karkkainen

      Thanks for the comment Denis. Very interesting. Never thought of working at 44100 and bouncing higher. Makes sense – definitely going to try this out.

  3. Jordan

    I made the decision to buy the 16 bit A/I over the 24 bit one for several reasons,

    Price was one of them. I'm still a small studio making NO money, working with NO clients yet, and couldn't justify the price for a box that does the same thing, but with the addition of the number '8'.

    Another reason was seeing how many people, even with the 24 bit A/I, still had to put it in 16 bit to work properly.

    Another reason is because I know how to track proper levels, and have never ever ever run into any issues what so ever with 16 bit. Let me reiterate that, I HAVE NEVER EVER EVER EVER HAD ANY SINGLE CONCEIVABLE ISSUE WITH RECORDING 16 BIT. 96 DB of headroom isnt enough? What monstrous clipping, distortion application are you applying for NINETY-SIX DB to not be enough?

    Maybe if you need 24 bits to track, you need to work on your tracking.

    And of course, the age old tale of the Beatles recording everything in mono to 4 track machines, and making better music than any of us ever will.

    I think waiting for a 24 bit A/I is an excuse to not make music. No mix has ever been made or broken by the addition of 8 bits. It's simply inconsequential.

    1. Ilpo Karkkainen

      The question here is not whether you can make good music in 16 bits – surely you can – like you said, music is about ideas and not equipment. But we don't live in the '60s – what's ideal for electronic music production right now?

      I have never had a problem because of working with 24 bit audio, what kind of trouble are you referring to that your friends have been experiencing?

      Finally, I really don’t think the price is an issue for most people in electronic music production. Even many of the very cheapest end “semi-professional” audio interfaces come at 24/96 these days. I understand if you need an interface with tons of I/O, this could be different – however there’s no denying 24 bit format is the current industry standard in production and even at consumer level (while CD is 16 bit, DVD and Blu Ray do 24).

    2. Ace

      I like to look at 24-bit being the equivalent to a 2″ reel that is Chrome/Gold plated which allowed Beatles and Stones and the rest… Hit the tape as hard as possible in order to capture all those many frequencies that are over-saturating the hell out of those reels…! That’s my personal outlook on the idea behind 24-bit allowing you to capture more of that information that is included in all those saturating frequencies!

  4. DrumEd

    I can bounce at 32bit whereas my recording input can only reach 24bit. I was told that bouncing at 32bit gives you more headroom? I work at 44.1 and then bounce my final pre-mastered wav at 48 kHz

  5. Rob

    32bit floating point allows you to hit lower frequencies. For me it comes down to what your ear-holes tell you.

    1. Rob

      also, big-ups Resound

    2. lambdoid

      The bit depth only affects amplitude, not frequency (the x axis on a graph). A higher bit depth increases the dynamic range of the audio and increases the signal to noise ratio/lowers the noise floor. However, 32 bit floating point does not add any more dynamic range than 24 bit integer (since no DAC exists that can use the full dynamic range of 24 bit audio) and is used internally in DAWs to avoid word length truncation and to improve accuracy at lower amplitudes. The sampling rate only affects the range of frequencies that digital audio can represent (the y axis on a graph) and determines at what frequency aliasing occurs, which is 1/2 the sampling rate (Nyquist-Shannon sampling theorem). Low frequencies are represented well unless you're using a ridiculously low sampling rate (extremely unlikely unless you're going for a vintage sound). It's the high frequencies that are more problematic at a lower sampling rate.

  6. Lasse

    Hi,

    This is an interesting post and something I was looking into myself about a year ago. Some plugins may sound better at higher sample rates – others may do internal supersampling so you will get pretty much the same result with 44.1kHz. You can get some lower latencies with higher sample rates as well.

    What is important for sample rates is the quality of the conversion, if you are using higher sample rates. Take a look at this site and your DAW graph: http://src.infinitewave.ca/

    Back when Reason 6.5 was the latest version I was running it at 96kHz and I had some aliasing issues when exporting songs to a 44.1kHz audio file. If you take a look at Reason 6.5’s conversion graph from that site, it shows terrible aliasing that reaches into the audible band. So if you use high sample rates, make sure you are using quality converters when it’s time to bounce something back into 44.1kHz or 48kHz. Personally I only use 44.1kHz now.

    As for 24bit vs 16bit: 24bits gives more headroom for the working stage. 16bits dithered is well enough headroom for an end result (like a CD), where the gain staging, leveling and dynamics are already done. However, when editing it’s a whole lot easier to have more headroom (more bits) so not all of your recordings need to be recorded as hot as possible without clipping.

    1. Ilpo Karkkainen

      A very cool link, this is great – thank you!

      So basically, the less lines you see in the graph, the better, right?

      Reason 6.5 is definitely looking bright.

      Ableton 9 seems to have some bad stuff going on as well. Very interesting.

      1. Lasse

        Yes, the sweep test should produce a single clear line. It is all explained in the help section, which also has some other interesting information on how SRC works.

        1. Ilpo Karkkainen

          Yeah, checked the help and FAQ sections, good info. Looks like I've made a solid choice switching to Pro Tools for mixing. But then again my ears told me that already. 😉 Thanks a lot Lasse.

      2. He

        Well, I don’t understand… If the less lines the better, why do you say ableton live 9 is not doing well on this test? It has just one line, and a very defined one! Just like Izotope’s softwares…

        1. Lipaz

          Ableton and Logic are beating “industry standard” Pro Tools 2018. LOL… Even Harrison Mixbus, which is already at version 5, seems to be tighter. Interesting.

          BTW 16-24bit and 44.1kHz for ITB EDM mixing is enough. Just get the mix right and balanced. When working with organic sound recordings, 88.2kHz is worth considering (easier to resample to its half value). Or 48kHz if computer resources are limited.

          Music is music, not just calculated bits.
          Daft Punk worked on their first album roughly in 12-16 bits, and at no more than 44.1kHz. Billions listened to them.

          Bear in mind humans do not like sterile sound. We are not used to it. Just like UHD movies with high fps still seem weird (e.g. The Hobbit), vs a good old 24-25fps analog tape recorded movie. It depends on aging/generations as well.
          You may be looking for the vintage sound you heard from an SSL through broadcasting, but in 2060 your grandchild may be fine with higher quality and listen to some 90's house music the way we see old vinyl now. Who knows.

          1. Mighty Kickx

            Can I record at 24 sample rate and mix at 96 or 196?

  7. Lasse

    So in short (my previous post), higher sample rates actually can be worse in some cases.

    By the way, in these discussions it is quite necessary to separate recording sample rate from the sample rate you are working with in a DAW with plugins and soft synths.

    For recording you could argue that, by the Nyquist–Shannon sampling theorem, 44.1kHz is sufficient for all sources, since you can record frequencies up to 22.05kHz, which goes well beyond the human hearing range. However, since A/D converters are not perfect, you might get some distortion in the 20 – 22k range, which is why I understand that the standard for digital film was chosen to be 48kHz to give some room to work around the cutoff point.

    Now if you record some live source at 96kHz, you are recording audio frequencies up to 48kHz, which are inaudible (as no one hears above 20kHz really), but may contain some audio information. If you convert this recording to 44.1kHz with a bad sample rate converter that has aliasing, you will bring all the inaudible supersonic information into the audible range. And in that case you would have gotten a better result recording straight at 44.1kHz, since it would not have picked up anything above 22.05kHz anyway.

    As for DAWs, mixing, plugins, soft synths I think you will find that many people perceive better quality with higher sample rates in online discussions on the topic. So try some higher sample rates and see if they are useful for your setup, but I wouldn’t worry at all if you’re stuck at 44.1kHz. Like said, it gives more CPU headroom and at least for my purposes the results have been exactly the same as with higher rates.

    1. Ilpo Karkkainen

      Thanks for the expert insights Lasse, much appreciated.

      1. Mighty Kickx

        Can I record at 24 sample rate and mix at 96 or 196?

    2. gmvoeth

      Be sure you low pass filter what you record to like 1/2 the sample rate or you may get folding of the frequencies recorded thus causing distortion. like 21KHz becomes 19KHz on up to on down. 42KHz becomes 1KHz if using too low a sample rate. Also the resolution of the reproduction becomes worse as the frequency goes up so like think 192KHz for everything if you really want quality.

  8. lambdoid

    z3ta has an oversampling option. I use the 48khz sampling rate in Reaper and when I switch the oversampling on in z3ta, you can clearly hear the difference in the high frequencies. This is great for pads and other sounds with a lot of high frequency content, but basses often sound a bit more grungy at the DAW sampling rate or utilizing z3ta’s undersampling option. You can do this for both online and offline, especially if you have an older CPU that struggles with z3ta’s high demands.

    1. Ilpo Karkkainen

      Interesting! So that means z3ta can work independently for example at 96k when your project is at 48k?

      1. lambdoid

        Yes. I think so, although at higher sampling rates it would make less difference.

      2. lambdoid

        There is an audible difference when using the oversampling option. It’s not always desirable to use it though. For dirty basslines, it’s usually better to use 1x the sampling rate(or sometimes even 0.5x), but for high-pitched sounds like pads etc the oversampling helps improve clarity.

  9. Dj Pushups

    Yet again a very nice read!

    Personally I've come to the conclusion that 44.1kHz and 24bit is enough in my bedroom studio. I haven't really given it even that much thought as to why that is; some producer friend of mine once just told me that it will sound better when I bounce my audio tracks in 24bit. And I suppose it did, since I stuck with that.

    I suppose that when this subject comes to a more professional level (say, mixing and mastering services for example) the sample rates and bit depths matter more and more. My thinking here is that the better the equipment you have to perceive and manipulate sound, the more the quality of the signal matters.

    Thank you for your time!

    Regards, Dj Pushups

    1. Ilpo Karkkainen

      I think there is some truth in that – for instance, since I upgraded my monitoring I have definitely started understanding what the fuss is about some plugins that I didn't quite get before. Hearing more nuances. But yeah – working with 24 bit audio as opposed to 16 is still beneficial regardless of whether you can perceive a difference or not, as the mixing/mastering guys can do a better job with it later, like you said.

  10. Andrew

    48khz for video

    1. Lance Blair

      Yes, I’m a voice over talent and I default to 48 kHz 24 bit for my video clients. That’s what everyone asks for. For e-learning or radio I record at 44 kHz 16 bit.

  11. Michael_Mann

    96K is a waste of CPU and disk space. 96K won't make any difference whatsoever in terms of sound because:

    1) no one hears anything above 19K in an ordinary listening environment
    2) you still must be extremely careful with all frequencies above 12K
    3) you're forced to downsample everything down to 44.1K sooner or later. Your billion-dollar converter won't change the mathematical fact that downsampling always creates aliasing, and that's going to be the very last thing you want to hear in your finished mix
    4) decently coded plugins sound identical at 44.1K and 96K or 192K because of oversampling. There should not be a difference in the sound!!!!
    5) automation won’t sound smoother in 96K either because everything you are going to hear in the end is aliasing caused by above-mentioned downsampling
    6) 44.1K won’t prevent you from getting 3-5ms latency if you have a good sound card and you know how to use it and you don’t forget to freeze tracks

    Keep your mix sparse enough and use send reverbs cunningly – that's how you are going to get that sheen that people using 96K talk about.

    1. Ilpo Karkkainen

      Thanks for the comments Michael!

      I recently stumbled upon a great and very in-depth article on this topic:

      http://people.xiph.org/~xiphmont/demo/neil-young.html

      Highly recommended for anyone who is trying to get their head around this stuff.

      1. Michael Mann

        Thanx Ilpo for the great link!

        It's funny that the article mentions Neil Young, because I wrote my first comment with him in mind. This 192K/24-bit nonsense is pretty much coming from a man who is notorious for excessive concert volumes and has reportedly damaged his hearing to the extent that he's basically incapable of following an ordinary discussion. At the same time, this mass producer of noise pollution is praised by the media for his awareness of environmental issues. And last but not least, Young's music has nothing to do with any kind of high fidelity or exceptional musicality. So, go figure….

    2. Josh

      If you're mixing down a bunch of tracks to two tracks, left and right, the frequencies in those tracks have to compete for the limited space in the two tracks. If you record, mix down, and master at a high sample rate and then put out your final customer copy as an mp3, it will sound almost as good as if the customer copy were 24 bit 192K. If you record, mix and master at a lower sample rate, each individual track will sound great, but when mixed together you will be missing pieces of the original tracks.

    3. Josh

      Each sample contains bit depth information. The Nyquist theory explains why you only need 44.1 K for frequency content. 96 K samples per second of bit depth information will help with mixing, processing, and mastering. The final product will sound fine at 16 bit 44.1K.

      1. Josh

        Some people seem to think that sample rate only relates to frequency. With digital audio the sample rate is the number of samples taken per second of an analog audio signal. Each sample includes frequency and sound level (decibel) information encoded in a 16 bit, 24 bit or 32 bit word length. Let’s do the math with a 16 bit 44.1KHz audio file. There are 8 bits in a byte. So there are two bytes in every 16 bit sample. There are 1000 bytes in a kilobyte and 1000 kilobytes in a megabyte. So there are 1,000,000 bytes in a megabyte. 16 bits or 2 bytes times 44,100 samples equals 88,200 bytes. That is how many bytes are in one second of a mono digital audio file at 16 bit 44.1KHz. Let’s make it stereo, 88,200 times 2 is 176,400 bytes. Now let’s make it one minute. 176,400 times 60 equals 10,584,000 bytes or 10.584 megabytes. Now there will be additional bytes added to the file so the computer knows what type of file it is.

  12. Eugene Eugene

    Wow, this article you gave a link to (and vids from it!) is pure gold. Thanks!

    1. Ilpo Karkkainen

      Isn’t it! One of those ones you want to bookmark and come back to every once in a while.

  13. Mav @ Scientific

    very interesting. i was still using 16bit, 44.1khz as i was used to this since forever, but i will move up to 24bit, 48khz from now on. cheers ilpo!

    1. Ilpo Karkkainen

      Thanks for the comment Mies good to hear 🙂

  14. Jim Spratling

    Probably a bit off topic but… what bit rate should I use for ripping vinyl? I am using an Artcessories USB Phono Plus. I have an iMac with Ableton running. I have encountered some strange glitches when running at 48 or 44.1 kHz. I have increased the sample rate and now have fewer glitches. My question is… Am I doing this the correct way and am I losing any quality by increasing the sample rate?

    1. Ilpo Karkkainen

      That is a great question!

      The glitches of course should not be happening in any sample rate, so that is something you may want to look into (unfortunately I am not very qualified to help there). You could possibly post about the glitch issue on a forum like Gearslutz (http://www.gearslutz.com/board/) and find help there.

      One thing you should try though, is increasing the buffer size inside Ableton. For ripping vinyl you can max it out. This might help getting rid of glitches.

      Live -> Preferences -> Audio -> Buffer Size

      To answer your actual question, I would personally use 48 kHz and 24 bit format for ripping vinyl. That offers plenty of resolution for capturing everything. Also make sure you are recording loud enough (watch for clipping though, but you probably knew that).

      You won’t lose any quality if you want to record (rip) in higher sample rate. But some slight deterioration might happen if you later convert that high sample rate file into lower sample rate (CD compatible 44.1 / 16 bit for example). The quality of the conversion depends entirely on the converter, some are better than others.

      So to recap, 48/24 is good enough, but if you want/need to record at a higher sample rate, you won't lose any quality – as long as you make sure you use a quality converter if you later on want to convert the sample rate.

      Does that make sense?

      1. Jim Spratling

        That is very helpful, thank you Sir!

  15. Noodlez

    Just came across this website about working ITB at higher sampling rates – http://varietyofsound.wordpress.com/2012/11/02/working-itb-at-higher-sampling-rates/

  16. robertrobin10 .

    24 bit will sound better than 16 bit. Remember, it's 16 bits only for a sound at 0db; if your peaks are at -6 you have already lost 1 bit of resolution, and the average of your music may be 20 below that!! And don't forget that's only for the bass and mids; treble has little energy because it's only harmonics, so they may be another 20db below!! So your 16 bit playback is only sounding as good as a 10 bit system!! You should be playing back at a 22bit rate to get the sonic benefits of a 16 bit sound. I believe this is the only reason why digital sounds fatiguing. As for sampling rate, the higher the better; recorders have sharp filters to reduce aliasing distortion which can cause sonic problems, and the higher the sampling rate, the higher or gentler the filters can be.

    1. Ilpo Kärkkäinen

      Only saw this comment now. But this is nonsense. Easy to confirm in a blind test.

  17. Ajay

    Hi can someone please explain to me how sample rates and bit depths could impact on mastering?

    1. Ilpo Kärkkäinen

      Your question is a bit vague but what I can say is that in a mastering scenario it’s usually good practice to avoid unnecessary conversions.

  18. stephenpyx

    It is my understanding that the sample rate is determined by the media you are playing it on. For example, if you are making a CD, 44.1 is best because of the way CD players process the data. If you are doing it for film then 48 is the choice. Gaming systems have their own sample rate but I don't remember it offhand.

  19. gmvoeth

    If things were properly made you could use any sample rate or bit depth you want to use and not just what industry dictates you use. BTW that 44.1 or 48 or 192 is not really what it says it is. What's the ambient temperature? It will change that rate a bit one way or another.

  20. gmvoeth

    Have you ever listened to firefighters talk over the radio using those throat mikes they sound distorted like they are using a bad sample rate combination.

  21. Kicking Saturday

    Let me say i have been recording film and studio from 1990-2007
    I will make my point short I have spoken to a PH-D. who broke down real facts.
    this could be a major warning to myself forevermore why well the truth of the recording facts of today not remove yourself from all the chats and notes from everyone and yes we are going to have death in the digitization of Apple and Windows let me break everything down to your un-contained mind of recording to while your recording facts you must be self-contained so instead of just telling you the best way i will say ask yourself self then look up the real facts of your computer-literate minds yes we lie to ourselves every day we are Chinese fake micro micro lines capture you are not Tracking audio unless you are on Tape
    tape will never lie to you but there is no cost-effective why with tape
    Real facts about your computer
    Ask yourself does a computer have 24-ch recording head block for audio?
    Ask yourself why does Rupert Neve say music needs Voltage?
    Ask yourself about sd camera you know about bad sound you get inside your sd handy cam? so why buy a SD audio recorder 4,Kto 50K VTR tapeless? Ok Sony
    Ask yourself what is a intel Cpu incoder ,Is he or she spoken to RCA Victor, AMS Neve, Rupert Neve ,Solid State Logic and and do they have a have a high iq on mixing console construction have they built-in a stable standalone computer do they offer you a stable 50 year recorder tracking audio computer.?
    do you think this is fair to you on your fake micro capture sd chip micro podcaster apple , windows
    Ask yourself knowledge of audio history?
    Ask your do you trust NSA with your China Dsp chip inside your China computer.? Why do you give Chinaware a world-class approved audio recorder tracking lie? your bank cash?
    Ask yourself are you happy with your missing micro capture lie digitization sound?
    Ask yourself is your computer safe does it make you millions of dollars
    Ask yourself is major studios closing is major record lables on a shutdown .
    No self-contained high-end self-contained Multi-Track Recorder Player Over Dubbing Recorder Playback 48 ch.

  22. ynys lochtyn

    OK so if you are using a DAW and want to avoid lags (latency), will the lesser requirements of 16 bit recording/playback prove speedier than the more demanding 24 bit?

  23. Loïc Kuantum

    Here are a couple of things I know about sample rates:

    Working at 44.1kHz (or 48kHz) is a good option for mixing and music production. Indeed, pushing your sample rate up to 88.2 or 96kHz won't make a difference (at least, not an audible one).

    UNLESS you are doing Sound Design.

    Why?

    When you record a sample at 44.1, a sine wave at 20kHz will be discretized (sampled) as a triangle shape, simply because you don't have enough points at that frequency to properly "draw" a real sine. (You can try that in your DAW by recording a tone generator at a frequency of 20kHz with a 44.1kHz setting.)
    Of course, a triangle wave has more harmonics than a sine (remember that the sine is the "root" of any sound; you can create any sound by just adding sines at different frequencies).
    So that means when you pitch down this 20kHz sine-actually-triangle-shaped wave, you'll hear something really different from what it should be.

    Actually, you will be listening to a root sine with other sines added on top.

    That's what we actually call distortion 🙂

    In fact, it will give you the sound of old school production.

    For example, when people used the Akai S1000 sampler, they would pitch up their samples in order to store more in its tiny memory (back in the day it was huge!).
    But while producing music, they would have to pitch those sounds back down, which would automatically create distortion/artefacts (pick whichever word you want; I think the choice just depends on how your sound sounds in the end).

    That's why, when you are planning to manipulate a sound, and especially to pitch it down, using 96kHz material is the best way to avoid those artefacts.

    I hope that will be helpful for you.

    (And sorry for any mistakes, english is not my mother tongue)

    1. kwiz

      Wow, that is good stuff right there! I've been producing music for the better part of two decades (depressing, noisy stuff) and I've never been able to pin down why it is that whenever I pitch audio down I can't get a clear sample quality. I don't think I ever attempted to figure it out, but this comment helps so much! Thank you sir and kudos, keep spreading that good knowledge around the Internet!

  24. Ads

    I’m actually writing an essay on this very subject, if anyone, pro or otherwise would mind taking 2 mins to fill in my survey that would be awesome. I’ll let you know the results of the survey and listening tests.

    https://www.surveymonkey.co.uk/r/PSMKZ9X

  25. Ali

    That's a great article.

    1. Ilpo Kärkkäinen

      Cheers Ali!

  26. Max

    Excellent and clear article.

    But I wonder.

    What happens if I work on a project that's set to 44.1kHz and at a tempo of 130 bpm (I use Ableton Live) and I decide to pull in a drum loop that's also 130 bpm, but was sampled at 48kHz? Will my DAW automatically resample the loop to 44.1, or will it play it slightly slower and thus out of tempo, and vice versa if my project is 48kHz and my imported sample is 44.1kHz?

    And many thanks for inspiring articles and posts; I follow you on Insta and Facebook and read your blog.

    Cheers from Sweden / Max

    1. Ilpo Kärkkäinen

      Great question. It depends on the DAW and its settings. In the case of Ableton Live, it will automatically resample to the project sample rate. The project sample rate can be set at Preferences -> Audio -> Sample Rate. It also pays off to select the high quality conversion in there. So with Live, it all happens behind the scenes and you won't really even notice anything.

      Some other DAWs won’t automatically resample so in those cases you will get audio playing back at wrong speeds. So you should always be mindful of what sample rate you are working in and what sample rates you are importing into the project.

      Generally it’s good practice to avoid resampling when you can as it can potentially degrade quality. But I wouldn’t worry about it too much – you don’t want things like these to get too much in the way of your creative flow.

    2. Ilpo Kärkkäinen

      I have added a section in the blog post to answer your question, as I think it is something a lot of people are wondering about. Thanks!

  27. Shevlin

    I would like to know if it's necessary to buy a DAC that handles the project's settings, or can I run a project at a higher sample rate/bit depth without any problems using the same DAC? (I am not talking about recording. Just working with MIDI or audio samples in a DAW.)

    Can't find an answer anywhere so a reply would be very helpful.

  28. Jean Clothem

    Wrong! The audio on YouTube is:

    codec 1: AAC 44100 (medium quality)
    codec 2: Opus 48000 (high quality) (near full band stereo)

  29. John Lenington

    Great article! How much of a role does your audio interface play in the quality of your sample rate conversion? For what it's worth, I am using Ableton Live 10 and an RME Babyface Pro. I make Melodic Progressive House, so I am usually not recording anything except when I record the output of a MIDI track to an audio track for some specific purpose. So everything is straight up ITB. I have been debating what sample rate to use. On my old PC I used 96 kHz. I can hear a difference for sure in the high frequencies (compared to 44.1 kHz), but I was reading an article by Ian Shepherd, a well-known mastering engineer, who said he can hear the difference between 44.1 and 48 but not really any difference between 48 and 96 kHz. So I may try 48 and see how that goes. But I am curious as to how much difference the audio interface makes vs the DAW itself. Thank you.

    1. Ilpo Kärkkäinen

      Hey John and apologies for late reply.
      In a normal ITB setup, the audio interface is not doing sample rate conversion. It’s the software that you are using (your DAW in most cases) that handles conversions.

      The audio interface then does digital-to-analog (DA) conversion so that the signal can be sent to your speakers. There are fairly large differences between interfaces in the quality of this DA conversion. I have not used RME myself but it should be quite good!

  30. Josh Lillie

    Why the heck would you need a sample rate as high as 192 kHz in digital audio?

    The Nyquist–Shannon theory explains why we only need a 44.1 sample rate for frequency. (B < fs / 2). B = Bandwidth, fs = sample rate, So the sample rate divided by 2 needs to be greater than the target bandwidth of frequencies (in this case 20Hz to 22 KHz) that you want to accurately encode and reproduce. Back in the 1930s when Nyquist was working for Bell Labs I’m sure he wasn’t too concerned with dynamic range, only frequency accuracy. Have you ever been on a phone and you can’t hear the person on the other end of the line so you ask them to speak up and they start yelling and you still can’t hear them?

    In digital audio the sample rate is the number of samples taken per second of an analog audio signal. Each sample records amplitude values which indirectly gives you frequency information via the sampling rate and by graphing the curve and connecting the samples amplitude values using sinc mathematical functions or something like that (break out the graphing calculator). It also directly gives you volume (Dbv) information encoded in a 8 bit, 16 bit, or 24 bit word length. This gives you a dynamic range of Dbv values. The more bits the better the dynamic range and the closer together the possible decibel values are. The actual decibel values will have to be rounded up or down to the nearest value on the scale. An analogy would be if you were using a ruler and measuring everything to the nearest inch. This part isn’t as accurate as analog but it doesn’t affect the frequency. If you sped up or slowed down the sample rate that would affect the frequency. It’s kind of like a vinyl record in that regard. If you play it at the wrong speed the frequencies are wrong but the volume and dynamics are the same.

    By raising the sample rate you get more Dbv values per second. This could help in mix down when you’re combining a bunch of tracks into 2 tracks. It could help in processing dynamics plugins during mixing and mastering. This might be why the industry is going to higher sample rates like 192 KHz. For more accuracy (closer to analog) in the sound level (decibel) department.

  31. Skip

    CAVEAT: I’m a beginner to music production and a solo game developer..
    Good article that helped clarify the terms for me. However, as a beginner, I am puzzled by your use of the term “professional music”. If I compose a song and incorporate it into a game which is sold, is that “professional”? Same as if I compose a song for a game and post it to say the Unity Asset Store for sale, is that “professional”? Or is your use of “professional” along the lines of a major music production for a well-known artist? From all that I have read in my learning, the recommendations have been for 44.1 kHz and 16 bit for use in games. Primarily due to the reasons you mentioned about file size. But as a beginner, when I am reading and learning, and more experienced people throw out the term “professional” is it because the intended audience is not someone like me who is just starting out?

    1. Ilpo Kärkkäinen

      Hey Skip,

      With professional music and audio work, I simply refer to anyone who either works with or aspires to work with audio professionally.

      From what I gather, you may be getting two things mixed up here (and I should have probably been more clear about it):

      1) The sample rate and bit depth used when you are working on the project.
      2) The sample rate and bit depth of the finished end product (ie. music track in a computer game).

      For example, you are correct that in most games the end product is at 44.1 kHz and 16 bits. But it’s likely the producer has used a higher sample rate and bit depth during production (most DAWs these days work internally at 32 or 64 bits), and the finished product is then converted into the final sample rate and bit depth at last stage (mastering).

  32. Nick Cent

    Bouncing the Virus TI into audio (online render – realtime that is) and listening through Audeze iSine10s. At 48Khz and 16 bit, the stem sounded flat and sterile – lifeless. The same for the 24 bit stem – no life, as in, the sound didn’t move or appear to have the same dynamics as the source material. The 32 bit stem captured it. The 64 bit file did even better and at this point I can’t believe the fact I can hear the difference. Then, Reaper offers some 2 or 3 other formats which sound like the 64 bit file. So, now that I can choose in how many bits I want to bounce each take, I will choose 32 bit for starters and always A/B with the source. This will also mean, that it is better to work at high sample rates and bits when you design your sounds and make your sample library. Funny, how I do all these and at the end I batch process all files to convert them to 16bits and transfer them to my S6000 akai sampler. 😀 The library is the library though. The akai is a crunchy playback of the library. Better to use samples and outboard instruments, effects, gear so that you can work at 44k 32bit and have no latency so that you can always play midi in and/or automation without latency too. I’m currently trying to separate these processes of sound design // midi perfomance // arrangement. Mixing and mastering are the 2 processes following.
    Before you start making a track, create it’s folder. Inside the folder you will throw all the samples you feel like you want to use (I use Sononym as well for this process) Snapper used to be cool too. Then, you can see what files you have to deal with, their attributes and such. The rest you should know already. Make samples for this project and render them in this folder. I remember when I was 14 and I was making music with HipHop ejay… I could form an arrangement in an evening with no experience at all, just because I had samples and parts ready to be placed on the grid. This is what you want to do. Drag an drop blocks of audio, your audio. But, adding tracks of audio and/or plugins (vst or vstis) will bring about latency. Thus, better transfer the samples out to a sampler or two, better have some outboard reverb and that’s when you can do what you want to do. The DAW is there to play back midi and record audio at high sample rates and bit data. Still, is not as easy… My many cents. 🙂

    1. Ilpo Kärkkäinen

      Thanks Nick for interesting observations!

  33. Alex

    One detail that has been overlooked is that higher sample rates allow for more high-quality pitch and time processing – at 192 kHz, you could play the material back at 0.25x speed and still retain the high end of your audio signal. This really comes in handy for sound designers when creating sound effects, and I speak from my own experience as well. 😉

    1. Ilpo Kärkkäinen

      Thanks Alex, I had not thought about that!

  34. ober

    My experience is that high bit and sampling rates for end user listening lets in too much noise and dirt that gives me a sickening headache and nausea. Higher rates also fail the “cat test.” The cat test is when my cats run out of the room from all the high pitched dirt in the music. My hearing tests to 19,000 Hz, a cat can hear from 55 hz to 79 Khz way higher than us puny humans. That’s why cats and dogs hate vacuum cleaners all the high pitched noise.

    PS: A big source of noise comes from power cables in your system, under floors and in walls. So keep power cables and all other wires wrapped in a Faraday Cage. Aluminum foil layers or aluminum window screen works great. Cheers!

  35. Susi

    You still haven't said a thing about why it's better exactly. Why!?

    1. Ilpo Kärkkäinen

      There is no single right or wrong answer. It depends on what you are looking to do. If you can tell me what your use case is, I can give you my recommendation.

  36. ~KiT~

    Considering that humans perceive up to about 20Khz, when doubled that frequency is (44.1Khz)and that should be enough to compensate for the digital interpretation of a given sound or resolution error as opposed to actual analog and true sound, according to Nyquist Shannon Kotelnikov theorem, where the frequency must be at least doubled when attempting to reproduce the sound digitally….
    44.1Khz at 32 bit depth equals to 88.2Khz at 16 bit depth in sound resolution “kbps”

    I do not see any reason to use higher frequencies with lower resolution in order to reproduce audible sound frequencies, approximately ~ 20Hz to 20Khz!

    What I am saying is: it is better to have more resolution in a given time period than to administer information at a higher rate but lower resolution to the listener…

  37. Any

    I am doing research work i have to collect audios and then make segments of single talk double noise etc each should sampled at 16000hz sample size 1024 i want to know sample size means file size?

    1. Ilpo Kärkkäinen

      I do not understand this sentence. Please clarify? 🙂

  38. Unin Filtrado

    Hi, I'm just a casual music listener. I use OGG. In my program, as in all others, I have the option to convert to many different Hz options.
    I have been googling what is the best option, and all sites say humans can hear up to around 22 KHz, and in the same breath they talk about encoding in 44 or 48 K.
    Would somebody be kind enough to tell me why I should not rip to 22k or at least 32K?

    Much appreciated.

    Have a good one!

    1. Ilpo Kärkkäinen

      Great question. A simple answer: The upper frequency limit of our hearing and the sample rate of an audio file are two different things, even though both are expressed in Hz.

      The sample rate in audio determines how many times within 1 second the signal is being measured. A sample rate of 22 kHz will generate 22000 measuring points within a second, 44.1 kHz creates 44100 points, and so on.

      Try recording a piece of music in 22 kHz and 44.1 kHz. I’m sure you will hear the difference when playing back the files. The 22 kHz one will sound noticeably worse.

  39. esteban

    I have been told that the reason it works at 44.1kHz 16bits and 48kHz 24bits respectively, aside from industry standardization, is because they are multiples, which means working in 44.1kHz 24bits or 48kHz 16bits may cause inconveniences when exporting according to the standard of the final destination format. 44.1, 88.2 kHz / 16, 32 bits – 48, 96 kHz / 24, 48 bits would be the way. Which makes sense mathematically speaking, but in practice, does this really make sense? I understand the aliasing problems which may happen by working with different sample rates, but would the audio be compromised in some way also, by working at non-multiple relations like 44,100/24, and then lowering it to 16? Regardless of the DAW we work on...

    1. Ilpo Kärkkäinen

      That does not make sense to me.

      Most DAWs work at an internal bit depth of 32, for instance, regardless of the sample rate you have chosen.

      So yes it’s perfectly ok and common to work in 44,100/24 and then convert down to 16 for the final release. Just make sure you are using good quality software for the conversion, and always use dither when going down in bit depth.
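
      If you want to see roughly what that last step involves, here is a minimal sketch (TPDF dither in NumPy, just to illustrate the idea; real tools use fancier noise shaping):

          import numpy as np

          def to_16bit_with_dither(x):
              """Quantise float audio in [-1, 1] to 16-bit after adding TPDF dither."""
              rng = np.random.default_rng(0)
              lsb = 1.0 / 32768                               # one 16-bit quantisation step
              tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb
              return np.clip(np.round((x + tpdf) * 32767), -32768, 32767).astype(np.int16)

          # A very quiet tone keeps its character as low-level noise instead of collapsing into hard steps.
          quiet = 0.0001 * np.sin(2 * np.pi * 1000 * np.arange(44_100) / 44_100)
          print(to_16bit_with_dither(quiet)[:8])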


  41. Edwin Dhondt

    Hi,

    Newbie.

    Note that in the below described case I’m not recording from external devices (eg guitar, piano, voice), I’m only using midi tracks with virtual instruments (built-in Ableton and also commercial and free plugins) and Ableton audio samples.

    I played back my scenes in Ableton’s session view, recording them into Arrangement View.

    After recording I switched to Arrangement view to further work on the recorded song.

    Can I find out with which bit depth (Record preferences) the recording was done ?
    Reason for asking is that Ableton preferences are global and not project specific. So after I did the recording, when working on another project, it might be that I changed the bit depth value in the Record preferences and now I don’t know anymore with which bit depth the recording was done.

    Related questions.
    If I want to be sure that my recording for this project was done with eg 24 bit, do I have to record my “song” again ?

    If I would change the bit depth after recording, it wouldn't affect the recording in Arrangement view, would it?

    Is the bit depth also relevant for midi tracks with virtual instruments (plugins), or is it only relevant for audio tracks ?

    Kr

  42. Amir Hossein

    I think 44.1kHz is best for dialogue, because at a higher sample rate more noise is also recorded.

  43. Peter

    For professionals: 44.1 – 96 kHz, 24 bits. For listeners: 44.1 kHz, 16 bits.

