Android audio framework quality issue?

+1 vote
523 views

The current AudioFlinger path of stereo 16-bit PCM at 44.1 kHz (-32,768 to +32,767) is a bit outdated. Audio quality suffers, especially when lossless material has to be downsampled, and more and more high-resolution recordings are being released, not only at CD level. So it is worth moving to 24-bit, 96 kHz (-8,388,608 to +8,388,607), which also brings the advantage of a higher SNR.

I don't know what the major obstacle to upgrading to hi-res audio is. My guesses are hardware limits, poor AD/DA converters, difficulty of implementing it on the software side, etc.

posted Jan 12, 2016 by anonymous


1 Answer

+1 vote

Beginning with Android 5.0 (Lollipop) and continuing through Android 6.0 (Marshmallow) and beyond, the internal audio data path is gradually being widened along several dimensions: bit depth, dynamic range, sample rate, and channel count.

Many internal calculations are now performed in single-precision floating-point, which has a greater effective bit depth of 24-25 bits and a much wider dynamic range than before. Sample rates up to 96 kHz or 192 kHz are supported, provided the endpoints (source and sink) specify the appropriate sample rate in their configuration. The resampler implementations have been rewritten for improved quality and speed. There is now better support for multi-channel audio, especially for indexed (non-positional) channels and on the input side.
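
As a rough sketch of how an application can ask for the wider path through the Java API (AudioTrack.Builder needs Marshmallow; the 96 kHz rate below is just an assumed example value), something like the following requests a stereo float-PCM output. If the device's output endpoint is not configured for that rate, AudioFlinger will resample.

import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioTrack;

// Sketch: request a stereo, float-PCM output at 96 kHz (values are illustrative).
int sampleRate = 96000;
int minBytes = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_FLOAT);

AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)   // float samples, ~24-bit effective depth
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setBufferSizeInBytes(minBytes * 2)
        .setTransferMode(AudioTrack.MODE_STREAM)
        .build();

track.play();
float[] buffer = new float[minBytes / 4];   // float samples in the range [-1.0, 1.0]
// ... fill buffer with audio ...
track.write(buffer, 0, buffer.length, AudioTrack.WRITE_BLOCKING);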

However, these improvements are not complete as of 6.0. There is still more work to do: for example, the effects and some of the file parsers and codecs (encoders and decoders) have not yet been upgraded, and multi-channel audio over the OpenSL ES API is not yet as capable as over the Java API.

It is up to each device OEM to configure the endpoints, so not all endpoints will make use of the wider paths.
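
For illustration, endpoint capabilities on these releases are typically declared by the OEM in the device's audio_policy.conf. The entry below is a hypothetical sketch of a primary output advertising wider rates and formats, not taken from any real device; unless a profile like this lists the higher rates and formats, tracks are mixed or resampled down to what it does list.

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100|48000|96000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT|AUDIO_FORMAT_PCM_24_BIT_PACKED
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADPHONE
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
  }
}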

Historically, floating-point was slower than integer arithmetic. But in modern processors the use of floating-point for internal math
does not significantly decrease CPU performance, and in fact in some cases it can be faster than integer math.

However, higher sample rates (e.g. 96 kHz and above) do use more CPU and power than lower sample rates (44.1 or 48 kHz). Higher sample rates are generally accepted to be useful during pro audio recording and editing. The value of high sample rates for playback/listening by ordinary users is controversial, especially if the endpoints are analog or have the kind of small transducers typical on mobile devices. Given the higher power consumption and controversy, device OEMs may choose to not implement high sample rates for the local transducers and only support high sample rates over USB.
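
To see what a particular OEM chose for the primary output, the native mix sample rate can be queried from the Java API. A small sketch, assuming context is an available Context:

import android.content.Context;
import android.media.AudioManager;

// Query the primary output's native sample rate and buffer size (API 17+).
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
String sampleRate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);         // e.g. "48000"
String framesPerBuffer = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);

On most phones this reports 44100 or 48000, which is consistent with the point above: anything higher is resampled unless the OEM exposes a faster endpoint, for example over USB.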

Here are links to some more resources on these topics:

Will it float? The glory and shame of floating-point audio

Google I/O 2014 - Building great multi-media experiences on Android
(0:42 to 3:57 are on the wider data path)

Data formats
https://source.android.com/devices/audio/data_formats.html

High-Performance audio on Android
https://googlesamples.github.io/android-audio-high-performance/

PS: It looks like you have also posted this on the Android group; I picked the answer up from there.

answer Jan 12, 2016 by Ahmed Patel
Do you know of any Android smartphone capable of sampling audio from the microphone at 96 kHz or higher rates?
Similar Questions
+4 votes

I am trying to record audio on an Android device, but I want to mute the device in code if it does not receive gain. If anybody has an idea about this, please share it. With earphones connected I can mute by pressing the mute button, but I want to edit the code so it is muted by default.

+2 votes

I am using the AOA 2.0 protocol to transfer the mobile's media audio to my head unit. If the mobile is not connected to the HU via BT, the audio transfer works perfectly fine over AOA 2.0. But if the mobile is paired with the HU over BT, the media audio is routed through A2DP. Is this generic behavior?

Is there a specific rule about which protocol should be used when multiple protocols are available for audio routing, or is it device dependent?

+1 vote

I'm looking for a way to stream different audio streams to different output devices, for example a phone call going to Bluetooth but my MP3 player going to the speaker. Is this possible in Android?

Is there any hack in any of the Android/ALSA layers that can do the trick?

+1 vote

I'm a newbie in Android development and, I must say, I'm a little confused by its audio framework. I am working on porting Android to our custom board and have almost everything working except GSM audio.
I suspect that it should be relatively easy to mark some stream as output and some as input for voice calls, but I cannot figure it out. I have two PCM interfaces on our SoC. One is used for the audio codec and provides the speaker and microphone; it works for me as audio.primary and Android applications can use it. The second one is used for the GSM audio interface, and I have it working at the kernel ALSA level (I can use the command-line aplay to play and capture sounds). Now I need to connect these two together - to get sound from GSM into the speaker and, in the other direction, from the microphone to GSM.

I think the audio_policy.conf file has something to do with that, but I checked the audio.h header and don't know whether AUDIO_DEVICE_IN_COMMUNICATION or AUDIO_DEVICE_IN_VOICE_CALL should be used for GSM, which output device to use, etc.

I know that this description is a bit messy, but I hope someone can point me in the right direction; my question will be clearer then.

+1 vote

Being a learner, I would like to know why my code sample below crashes the application and how the code can be modified to avoid this problem; it has become difficult to debug. I hope a developer can help me resolve this issue.

Intent sendIntent = new Intent();
sendIntent.setAction(Intent.ACTION_SEND);
sendIntent.putExtra(Intent.EXTRA_TEXT, textMessage);
sendIntent.setType(HTTP.PLAIN_TEXT_TYPE); // "text/plain" MIME type
startActivity(sendIntent);

...