So, assuming you managed to follow the instructions in our last article, you have successfully taken a reading of your room. Now, how do you interpret it? At first glance it can seem confusing: there is a lot of information, and it is not obvious how it all relates. In this article I want to talk you through some of the main functions of REW, the graphs, and what they generally mean. This basic introduction should get you started, but in reality, when an experienced acoustics advisor uses this information, they will be weighing a lot of different factors to interpret what the graphs are actually saying. They will also:
- Cross-reference the graphs. How they relate to each other tells us a lot about how sound is behaving within the room.
- Look at the dimensions of the room and make calculations based on those dimensions, possibly at multiple points in the room if there are different areas or heights.
- Consider the frequency range of the speakers and whether any subs are being used. Also consider what we know about how different monitors react to boundaries.
- Consider the materials, thickness and shape of the room and how they will affect the way sound behaves in that room.
- Consider all of the environmental, external and practical constraints that will affect how we interpret the sound and how the sound reacts to the physical space it enters.
Read the other parts here:
A guide to the graphs we are going to study
Hopefully you have a screen similar to the one below. It will automatically be set to the first tab, SPL & Phase. In this article we are going to look at this tab, along with the corresponding All SPL tab and the Impulse graph tab. In Part 2 we will go on to look at the RT60 tab and the Waterfall and Spectrogram tabs (not shown here), plus some common issues you may come across.
The SPL (Sound Pressure Level) Graph.
Our first graph is one I am sure a good number of you are already familiar with and will have seen in some context, if for no other reason than manufacturers produce them to show how their monitors perform.
But what exactly is sound pressure, and what are we measuring? Sound pressure is the deviation of the local pressure caused by a sound wave from the ambient atmospheric pressure. We usually express it in decibels: loosely speaking, how loud each sound is, or put another way, its intensity. Sound intensity itself is very difficult to measure directly, so it is the sound pressure that we record, via sound pressure levels (your measurement mic and the REW software in this case); intensity is closely related to sound pressure, being proportional to its square. Decibels are the scale we use for this pressure: the more pressure, the louder the sound, and the higher up the dB scale it sits. A decibel is one tenth of a bel, and the scale corresponds closely to how we hear and interpret the loudness of sounds.
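If you like to see the numbers behind this, here is a quick sketch in Python of how dB SPL relates to sound pressure, using the standard 20 µPa reference. The pressure values are just for illustration:

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (20 µPa, roughly the threshold of hearing)

def spl_db(pressure_pa: float) -> float:
    """Convert a sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

# Doubling the pressure adds roughly 6 dB:
print(round(spl_db(0.02), 1))  # 60.0
print(round(spl_db(0.04), 1))  # 66.0
```

Notice the logarithmic scale at work: doubling the pressure only adds about 6 dB, which is part of why the dB scale tracks our hearing so well.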
So, let’s take a look at a graph.
- X axis is the sound frequency being measured, in hertz (Hz).
- Y axis is the sound pressure level, measured in decibels (dB).
- Settings – go to the Graph menu and choose 1/24 smoothing.
Hopefully you will have measured and captured your test at normal listening levels, which is about 75 dB SPL. Your normal listening level should not exceed 85 dB for prolonged periods of time, as this will cause irreversible damage to your hearing in the long term.
The first aspect to note is where the results begin. It depends on your monitors: if you are using small bookshelf monitors, the readings may not really kick in until about 50 Hz. In our example above it is clear these speakers work from around 28 Hz.
How to interpret the results
- The line on the graph traces the sound level of each frequency as the sweep was played and the results were recorded in your room. For example, 50 Hz is at 84.2 dB whereas 70 Hz is at 68 dB. Each frequency your monitor produces has been recorded this way.
- This means that note G1 (50 Hz) is just over 16 dB louder than note C#2 (70 Hz). Imagine how that would sound while mixing a track: you would literally hear G1 super loud and C#2 barely at all.
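For the curious, the note frequencies and that level difference can be sanity-checked with a few lines of Python. Equal-temperament tuning from A4 = 440 Hz is assumed, and the 84.2 dB and 68 dB readings are the ones from our example graph:

```python
A4 = 440.0  # Hz, standard tuning reference

def note_freq(semitones_from_a4: int) -> float:
    """Frequency of the note n semitones away from A4 (equal temperament)."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# G1 sits 38 semitones below A4, C#2 sits 32 below:
print(round(note_freq(-38), 1))  # 49.0 Hz (G1)
print(round(note_freq(-32), 1))  # 69.3 Hz (C#2)

# The measured levels at roughly those frequencies were 84.2 dB and 68 dB:
print(round(84.2 - 68.0, 1))  # 16.2 dB difference
```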
What is it we are looking for?
- In an ideal world, the difference between any of the frequencies would not exceed 10 dB. Where it does, those are the frequencies we concentrate on providing additional treatment for. However, in the real world about the best you will see is a 6 dB difference. I have seen maybe three professional studios better than that in my entire career, so you do have to be realistic about what we are aiming for.
For the graph above what key areas would give me cause for concern?
- The entire low end needs cleaning up: from 50 Hz to 180 Hz there is a fluctuation of plus or minus 25 dB. The mids from 643 Hz to 700 Hz fluctuate by over 10 dB, and the highs could certainly do with some taming.
- Our graph shows an untreated room, but you can keep repeating this process for various reasons: to find the best speaker position, to fine-tune once the room is treated, and so on.
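If you export your measurement as frequency/level pairs, a check like the following rough Python sketch can flag bands whose peak-to-trough spread exceeds our 10 dB target. The readings below are made-up stand-ins, not the actual graph data:

```python
def band_fluctuation(readings, f_lo, f_hi):
    """Peak-to-trough spread in dB across readings within [f_lo, f_hi] Hz.
    `readings` is a list of (frequency_hz, level_db) pairs."""
    levels = [db for hz, db in readings if f_lo <= hz <= f_hi]
    return max(levels) - min(levels)

# Made-up example readings, loosely shaped like our low end:
example = [(50, 84.2), (70, 68.0), (120, 79.0), (180, 62.0)]
print(round(band_fluctuation(example, 50, 180), 1))  # 22.2 -- well past the 10 dB target
```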
The ALL SPL Graph
It is worth briefly mentioning this graph, as it is basically a comparison tool.
Every test you conduct and save to the same REW file can be selected and compared against the others. If you are trying to find the best speaker position, for example, this is great information. There are many different situations where this is useful:
- If you are trying to isolate a particular issue, taking a reading of the left monitor, then the right monitor, then both in stereo can tell us a lot more about the room and where the issue is being generated
- To compare the effectiveness of different acoustic treatments in different positions
- To find the sweet spot in the room
- To find the best monitor position.
In our sample graph below, we compare our original mix-position reading with a reading taken after some Alpha panels had been placed in the front corners. Look at the difference they have made in the low end: at 50 Hz the level has dropped by 6.5 dB, and at 70 Hz it has risen by 6.3 dB. This means our previous 16 dB gap has been reduced to only about 3 dB!
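The before-and-after arithmetic can be sketched like this. The numbers are the ones quoted above; this is just illustration, not REW output:

```python
# Before/after readings built from the figures quoted in the text:
before = {50: 84.2, 70: 68.0}               # original mix-position reading
after = {50: 84.2 - 6.5, 70: 68.0 + 6.3}    # same position with corner panels added

for hz in before:
    print(f"{hz} Hz: {after[hz] - before[hz]:+.1f} dB")

spread_before = before[50] - before[70]
spread_after = after[50] - after[70]
print(f"spread between 50 Hz and 70 Hz: {spread_before:.1f} dB -> {spread_after:.1f} dB")
```

The panels have not just lowered the peak; by pulling the two frequencies towards each other they have shrunk the spread from about 16 dB to about 3 dB.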
The Impulse Graph
The Impulse graph is our first example of the large capabilities of the Room EQ Wizard software and what it is able to record and produce. In the context of this introductory article, however, we are going to concentrate purely on the Impulse Response Envelope, or in simple terms the reflections within the room.
What is an impulse response graph? It plots what happens when a single, very loud sound is played into a room. What we are interested in is what happens to that sound once it is released into the room, and what that tells us about how the room affects the way we hear music. The original impulse response tests were actually devised to plot phase inaccuracy, particularly in the design and manufacture of loudspeakers (and much of what REW does still enables you to do this). In our context we are using it to plot the acoustic characteristics of our room.
To understand this best, I have prepared the two following graphs. Graph 1 is in a fully treated room.
Graph 2 is in a completely untreated room
So first, let’s understand the graphs.
- X axis is the length of time, in milliseconds, over which the microphone recorded the reflected response.
- Y axis is the sound pressure level in decibels; in this case we have recorded the drop in level from the initial sound played into the room.
What are we expecting this graph to tell us?
As the graph records the way the sound has behaved in the room by the time it reaches our microphone, it is telling us about the reflections the microphone receives. These are most recognisable as echoes. A sound is released into the room, hits every surface it comes into contact with, bounces around, and eventually comes back to our microphone. How quickly, and at what level (pressure), that reflection arrives makes a difference to how we as humans comprehend what we are hearing.
Any sound that reaches the microphone (or, in reality, our ears) at a level above -20 dB relative to the direct sound and more than 20 ms after it, we perceive as an echo. (I have heard different values here; some say 30 ms.) We can hear it, we know it is there, and we can deal with it by treating our flat, parallel surfaces. But what about the sound that reaches our ears within that 20 ms window while having dropped by less than 20 dB? This is the problem, and the curse of the mixing engineer. Our brain cannot compute it as a separate sound from the source, so it picks up mixed signals: it may think it is hearing the left speaker when it is hearing the right, and it can create a situation known as smearing, where the reflected sound reaches our ears so quickly after the original source sound that instead of a crisp start and finish to each sound or frequency you get a smeared, confused sound instead.
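As a rough rule-of-thumb sketch in Python (the -20 dB / 20 ms thresholds are the ones discussed above, and as noted, sources vary on the exact figures):

```python
def classify_reflection(level_db: float, arrival_ms: float) -> str:
    """Rough classification of a reflection using the -20 dB / 20 ms rule of thumb.
    `level_db` is relative to the direct sound (0 dB = as loud as the source)."""
    if level_db <= -20:
        return "inaudible against the direct sound"
    if arrival_ms > 20:
        return "heard as a distinct echo"
    return "fused with the direct sound (smearing)"

print(classify_reflection(-12, 35))  # heard as a distinct echo
print(classify_reflection(-8, 4))    # fused with the direct sound (smearing)
```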
How do we interpret the graphs?
- Every line on the graph is reflected sound. I have drawn a line across the graph at -20 dB. On the second, untreated room you can see there are many, many lines above this: that is all the reflections bouncing around the room.
- In Graph 1, where most of our reflection points have already been treated, you can see there are very few lines above this threshold, and in general the graph falls away; most reflections have been dealt with, and it is just the few still bouncing around the room that we are seeing.
Graph 1 does still have some troublesome lines. What are they? What is helpful about sound is that its speed is constant, which means we can do a calculation to work out what distance the reflected sound travelled before our mic picked it up. If, as in example 1, the lines arrive very soon after the direct sound, the cause is often something very close to the source and the mic. The biggest culprit here is the desk, which is difficult to treat, and why we would always recommend using the smallest desk that lets you do your work. It can be other things too. I once had a situation, in a fully treated room, with a nasty 150 Hz problem I simply could not pinpoint. It was not until I used this graph, did some calculations, and eventually received a photograph of the room that I identified the problem: a very high-backed chair next to the microphone all along!
How to calculate where the reflection may have been generated:
- Work out the time in milliseconds at which the line you want to investigate appears. Let’s say it is 2.5 ms.
- Sound travels at a speed of 343 metres per second
- That is 343 mm per millisecond
- Multiply by 2.5 and that sound has travelled about 86 cm
- Then it is a matter of working out what path of roughly 86 cm the sound could have taken from the source, via a reflection, to our microphone
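The steps above can be wrapped up in a tiny calculation, for example in Python:

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def reflection_distance_cm(delay_ms: float) -> float:
    """Extra distance (in cm) a reflection travelled, given its delay after the direct sound."""
    return SPEED_OF_SOUND * delay_ms / 1000 * 100  # m/s * s -> metres -> centimetres

print(round(reflection_distance_cm(2.5)))  # 86 cm, matching our example
```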
In Part 2 we are going to concentrate on the graphs that look at the decay time of sound and what that means for us and our room.