Why DSP?

Quote:
I find that sound processing such as equalizers warps the music beyond what you want to hear. Neutron’s sound processing is pointless if you want your sound to be as true to the original recording as possible. If you want truly better sound then invest in good transducers, lossless files, and a better amp/dac.

————
There are aberrations in current music recording and reproduction technology that simply cannot be fixed with “good transducers, lossless files, and better amp/dacs”.

For a start, 99% of music is mastered for loudspeakers, not headphones.
And in any case, it’s quite impossible to master all that well for headphones, because headphone listening is a moving target, with wild variations in results across different headphones and different listeners.

All the processing I do is about transcending the physical limits imposed by my already quite excellent audio hardware and by my personal head/ear physiology (which, while not extraordinary, deviates enough from the norm to make generic mass-market headphone tuning a hit-or-miss affair, as usual).

While we’re at it, there is no “the sound as the artist / producer / sound engineer / whatever intended” in any commercial recording played back through any stock home system, least of all an audiophile headphone rig. The people producing the music know that their target audience and listening devices are a wild mishmash, from Earpods and car radios to Martin Logan towers (or what-have-you) in billion-dollar listening rooms. The produced recording is full of compromises, designed to offend as little as possible on all setups rather than to amaze anybody on a particular setup. Audiophiles don’t even take the studio listening setup on which the music was mastered as the goal for their audio systems, and they may even be right, because the studio setup may have been designed to reveal as many potential flaws in the recording as possible rather than to present the music at its best.

But that means that NOBODY has ever heard the produced music at its best even in the studio, and it is up to YOU to find / create the setup that pleases you the most!

It’s high time audiophiles took the creation of the ultimate sound into their own hands instead of relying on spoon-feeding from audio companies.

 

Posted in DSP, Head-Fi Posts

Audio reinvented–what improvements need to be made to the audio industry

1. Audio mastering needs to be improved, but for this to happen it needs a steady target to aim for (rather than having to cater to everything from mono boomboxes to car stereos to audiophile systems in one recording).

2. Accordingly, a new audiophile music standard needs to be put forward that segregates the responsibilities of audio mastering and audio playback correctly; for a start dynamic compression needs to be specified as a standard playback parameter that can be switched on and adjusted on the playback end to cater to different playback equipment capabilities and listening environments. Equalization and room correction capabilities need to become standard so that mastering engineers can simply aim for the best sound in the studio environment (which should also be standardized), while the wildly varying end-user listening setups can intelligently do their best to match the studio sound, rather than the other way around.

3. A second version of all albums, mastered for binaural (headphone) listening, ought to become standard. (I’m sure all head-fiers can get behind that!) For old albums mastered for stereo only, headphone listening systems ought to be upgraded with speaker-system virtualization software that goes beyond the presently common, primitive crossfeed options. Darin Fong’s OOYH software is a good start: http://www.head-fi.org/t/689299/out-of-your-head-new-virtual-surround-simulator Here’s my own humble attempt: http://www.head-fi.org/t/555263/foobar2000-dolby-headphone-config-comment-discuss/810#post_12496793

4. A whole industry of consumer-oriented audio engineering needs to be built from the ground up. For loudspeaker systems it entails proper room setup and speaker calibration by trained professionals rather than end-users all trying to do their own thing. For headphone systems it entails widespread adoption of HRTF measurements a la those done for the Smyth Realiser: http://www.head-fi.org/t/418401/long-awaited-smyth-svs-realiser-now-available-for-purchase

The latter would be an alternative to (3), and the Smyth Realiser is in the High-End Audio forum for good reason. Almost every Realiser user will tell you it makes a joke of all talk of headphone “soundstage” and “realism” on conventional headphone systems. Individual HRTF measurements are necessary because of the wild acoustic variation between individuals wearing headphones.

5. Audiophile headphones should come standard with compensation curves for arriving at a neutral reference. For (4) the HRTFs should be recorded as deviations from the KEMAR dummy head reference, so that corrections can be applied to the compensation curve to arrive at the studio-intended sound for every listener, using whatever headphones. Software to apply such corrections should come as standard on any audiophile music player for portable use.
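As a toy illustration of how point 5 could work on the player side (the band choices, numbers and even the sign convention here are my own assumptions, not an existing standard), the playback software would simply sum, per frequency band, the headphone’s published compensation-to-neutral curve with a correction for how the individual listener’s measured response deviates from the KEMAR reference:

# Hypothetical per-band curves in dB (made up for this example).
bands_hz = [100, 1000, 3000, 10000]

headphone_comp_db    = {100: -2.0, 1000: 0.0, 3000: 3.0, 10000: -4.0}   # headphone -> neutral, defined on KEMAR
listener_vs_kemar_db = {100: 0.5, 1000: -1.0, 3000: 2.0, 10000: -3.0}   # listener minus KEMAR

# Correct the published curve for how this listener differs from the dummy
# head it was derived on (sign convention assumed; flip it if the deviation
# is defined the other way around).
total_eq_db = {f: headphone_comp_db[f] - listener_vs_kemar_db[f] for f in bands_hz}
print(total_eq_db)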

But as you can see, every point involves sweeping changes to the audio industry. I’m not sure there’s any money to be made from it, and it seems obvious that the majority of the target market won’t even appreciate the reasons behind such changes if and when they are proposed. It needs to be put forward as a whole new system covering everything from recording and mastering to playback. Everyone would have their own slightly different version of the underlying ideas, and it would be very difficult to arrive at a universally adopted standard.

Posted in Head-Fi Posts

Re: Onkyo DP-X1 EQ

Quote:

Originally Posted by AudioMan2013
It has an EQ but it doesn’t scale well.  If you start increasing frequencies the signal starts clipping.  This is a sign of having little digital headroom in their design.
Quote:

Originally Posted by AudioMan2013
I have installed 2 apps, Tidal and USB Audio Player Pro.  The EQ doesn’t scale well because the player has very little digital headroom.  If you try to increase frequencies it is very easy to clip the signal.  For example, sometimes I an in the mood for bass and a slight to moderate increase in the lower frequencies causes clipping and therefore major distortion. My Cowon allows for a huge increase in bass that allows me to be a bass head.  I have 2 great headphones for bass – the Fostex TH900 and the JVC HA-SZ2000.  The worst EQ dap I own is the Ibasso DX100.  This Onkyo is the second worse. Heck, even my old antique Sony NW-HD1 had a more functional EQ and that was over 10 years ago.
Quote:

Originally Posted by masterpfa
I agree I have not fared to well when trying to use the EQ on my DP-X1 and as a result very seldom use it all much these days

There is NO digital headroom in modern music recordings. The only way to EQ is down, unless you tweak your system using things like ReplayGain to lower the digital signal before processing. But audiophiles hate stuff like that (“where’d my bit perfect recording go??”) so Onkyo doesn’t do it.

Other alternatives:
1. Automatically apply a fixed negative preamp to the EQ (this makes the music quieter the moment you push the EQ button, so everybody thinks the EQ makes the sound muffled; FiiO did this and nobody uses their EQ either).
2. Quietly apply a fixed negative preamp ALL the time, even when the EQ is off. Again, this throws away bit-perfectness.
3. Do (1), but automatically increase the master volume to compensate. FiiO did this for the X1, and people complained that the max volume decreases with EQ on. :facepalm: Still one of the best solutions so far, but I don’t think this is possible (or at least it’s very complicated) on an Android platform.
4. Quietly apply a digital soft limiter to the EQ so you don’t hear obvious distortion. A soft limiter shouldn’t change the signal at all when you’re not clipping, but when you are clipping it does its work by dynamically compressing your music further. Only a band-aid for the problem, to be sure.

In short, EQ is between a rock and a hard place when facing people who expect it to break the laws of digital audio. The only solution is to learn how to use it properly.
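To make the headroom point concrete, here is a minimal numerical sketch (my own illustration in Python/NumPy, not Onkyo’s or FiiO’s actual code) of why boosting an already full-scale master clips, and how the fixed negative preamp from options (1)/(2) avoids it:

import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A "modern master": a 60 Hz tone normalized right up to 0 dBFS.
x = np.sin(2 * np.pi * 60 * t)

def bass_boost(signal, gain_db):
    # Crude stand-in for an EQ bass band; with a single 60 Hz tone,
    # scaling the whole signal by the band gain is equivalent.
    return signal * 10 ** (gain_db / 20)

boost_db = 6.0

# Naive EQ: the boosted samples exceed digital full scale (1.0) and will
# be hard-clipped somewhere downstream.
naive = bass_boost(x, boost_db)
print("peak without preamp:", naive.max())   # ~2.0 -> clipping

# Fixed negative preamp equal to the largest boost, applied before the EQ:
# the result stays at or below full scale, at the cost of overall loudness.
safe = bass_boost(x * 10 ** (-boost_db / 20), boost_db)
print("peak with preamp:", safe.max())       # ~1.0 -> no clipping, just quieter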

You can easily bring the whole EQ curve down below clipping on the Onkyo app by dragging down on an empty part of the graph (i.e. not dragging directly on the curve).

Also, the EQ / spectrum graph flashes red when clipping happens in real time. That shows you when you need to bring the EQ setting down.

For comparing the flat output against an EQed setting (which will be quieter if you follow the instructions above to avoid clipping), I advise adding a new preset that is just the flat default line dragged down to match the EQed volume. This way you can compare the settings at equal volume and let your ears judge more fairly whether the EQ setting is actually an improvement. Yes, you’re not doing bit-perfect output with the lowered flat setting, but if you believed that compromised audio quality you shouldn’t be EQing in the first place…
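If you want a starting point for how far to drag the flat comparison preset down, a crude (and entirely hypothetical, not part of the Onkyo app) rule of thumb is to average the band gains of the EQed preset after it has been lowered below clipping, then fine-tune the match by ear:

# Band gains of the EQed preset after dragging it below clipping,
# so its highest band sits at 0 dB (values are just an example).
eq_gains_db = [0.0, -2.0, -4.0, -5.0, -3.0]

# First guess for the flat comparison preset: the average gain of the
# EQed curve.  This is only a rough loudness proxy; trust your ears.
flat_offset_db = sum(eq_gains_db) / len(eq_gains_db)
print(f"drag the flat preset down to about {flat_offset_db:.1f} dB")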

Quote:

Originally Posted by AudioMan2013

I know there there is no digital headroom in the music/audio file, I was speaking about the dap itself and the playback software. What you are saying basically that in all digital audio platforms a software based eq cannot be applied that works well. That is not true. There is a software work around for implementing an eq properly and the same method is used for digital volume control so one doesnt start losing bits when listening at low volumes. In the audio processing, by adding extra bits would allow for plenty of digital headroom. Instead of processing in 16 or 24 bit registers, it would process in higher bit depth registers before sending the data to the dacs. ESS uses pretty much the same method for its volume control.

The solution you propose is invalid when there are also 32-bit and 64-bit tracks to be decoded bit-perfectly. Also, any such padding in the digital domain would mean losing dBs of S/N ratio at the analog output. Of course, it’s insanity to think that 64-bit audio needs to be decoded bit-perfectly to avoid audible loss, and stupid to think that in this day and age of S/N ratios well over 100 dB, losing a few dBs to make room for EQ would hurt anything, but that’s a story for another day…
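For what it’s worth, the workaround being described, i.e. doing the EQ math in a wider intermediate format and then trimming the level before the DAC, looks roughly like this (a sketch under my own assumptions, with a plain gain standing in for a real EQ filter; not ESS’s or Onkyo’s actual pipeline):

import numpy as np

def eq_with_internal_headroom(pcm16, boost_db, output_trim_db):
    # Promote 16-bit integers to 64-bit float: effectively unlimited
    # headroom for the intermediate math, so the boost itself never clips.
    x = pcm16.astype(np.float64) / 32768.0
    y = x * 10 ** (boost_db / 20)        # stand-in for a real EQ filter

    # The DAC input is still a fixed-range signal, so the extra level has
    # to be trimmed away again before conversion; this trim is where the
    # few dB of S/N ratio argued about above are spent.
    y *= 10 ** (output_trim_db / 20)
    y = np.clip(y, -1.0, 32767 / 32768)
    return np.round(y * 32768.0).astype(np.int16)

pcm = np.array([32767, -32768, 16384], dtype=np.int16)   # near full scale
print(eq_with_internal_headroom(pcm, boost_db=6.0, output_trim_db=-6.0))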

Posted in Head-Fi Posts

Measure frequency response by ear–the why

Headphone frequency response measurements have been around for ages.

Headphone frequency response measurements suck.

Now, don’t get me wrong, I’m a hard-nosed objectivist when it comes to audio in every other way. I don’t cringe when DBT experiments show that you can replace coils of speaker wire as thick as a snake and worth more than their weight in gold with rusty coat hangers, with no difference heard by anybody. I believe that modern audio performance can be characterized almost in its entirety by frequency response, phase response and nonlinear distortion, and most items in the audio chain perform admirably well on all counts, with the most notable exception being speaker and headphone frequency response. (Phase and nonlinear distortion are more evident in transducers than in almost any other part of the chain, but have been shown to be controlled to inaudible levels across the audio band by many good headphones and speakers, some not even that expensive. Understatement alert!)

Nevertheless, everyone knows that headphone frequency response measurements suck. Nobody can agree on which coupler, artificial ear or dummy head to measure them with; after measurement, nobody can agree on the best compensation curve for massaging the graph; and no matter what choices are made on those two counts, everybody can agree that EQing the resulting graph to flat does not a good-sounding pair of headphones make. As an experiment, I dare you to try doing exactly that with any headphone you own from the extensive measurement database at InnerFidelity.

The problem, as far as I can see, is that we’re still not very good at simulating the mechanics and acoustics of a real human ear with dummy ears or dummy heads. Probing a real ear is also a non-starter, because the acoustics at any probe location other than the eardrum itself are different. Even if someone were to sacrifice themselves for science and audio and replace their eardrums with microphones, those microphones wouldn’t match the acoustic impedance of real eardrums. Massaging flawed measurement data with compensation curves will not yield accurate results either, because the required compensation curve changes with every model of earphone: each model’s acoustic coupling with the artificial ear differs from its coupling with a real ear, and that difference is unique to the model.

Feel free to correct me on any of the above–I’m not used to “bashing science” myself.

Thus, as an alternative to the above, I hereby propose that we measure frequency response of headphones by ear.  As for how this is even possible or how this can possibly be any good, stay tuned for the next update–“the how”!

As an early spoiler, take a look at the listener feedback on an example of headphone frequency response correction utilizing the result of FR measurements by ear.

Posted in FR measurements by ear

New Site Direction

This site just changed from blogging about a yuri dating sim to headphone audio o.O Site content to be added shortly!

Posted in Uncategorized

And it’s all done!

Get it here:

http://petalsgarden.axypb.net/2012/12/17/sh6/

A full list of credits is also on that page.

Posted in Sono Hanabira, Sono Hanabira 6 translation

Beta proofreading patch out!

The draft translations were done on 3 May 2012 and passed off for checking and editing. Unfortunately, editing has been going more slowly than expected. In the meantime we’ve received numerous comments asking about the progress of the English patch.

Today we’ve decided to release a beta proofreading patch!

Continue reading

Posted in Sono Hanabira, Sono Hanabira 6 translation

Getting there

As I mentioned in the previous post, I was only responsible for the draft translation; other members of the team are responsible for translation checking, editing and quality checking. For the last few weeks these have been going rather slowly (hopefully not because I made a mess of the translation!), but yesterday @fkeroge suddenly posted that all chapters have been checked and edited. With that leap of progress, we’re much closer to releasing this game in English than before!

Posted in Sono Hanabira, Sono Hanabira 6 translation

Draft translations all done!

Well it’s been a hectic few days but it seems I found some time to finish the draft translations after all.  All chapters have now been translated.  Now we’ll have to wait… Continue reading

Posted in Sono Hanabira, Sono Hanabira 6 translation

Introducing the team

[Since taking up this project early this month I have been able to progress with the translation at a healthy pace so far. Unfortunately I’m afraid that Real Life may overtake me in a big way in the coming days, and I won’t be updating nearly as often. I’d promised a “colorful introduction” for the people joining the translation team, but right now I’m just trying to get as much translation done as possible before RL hits the fan. So for now I’ll just list the people on the team in a brief blurb and update this post as I get to know the team members better and find the time to write creatively.] Continue reading

Posted in Sono Hanabira 6 translation