
How to play audio backwards?


Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing that reversed audio data.

Are there any existing examples for iOS that show how this is done?

I found an example project called MixerHost, which at some point uses an AudioUnitSampleType to hold the audio data that has been read from a file, and assigns it to a buffer.

This is defined as:

typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24

And according to Apple:

The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.

In other words, it holds noninterleaved linear PCM audio data.
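
As an aside, 8.24 fixed point means the high 8 bits carry the integer part and the low 24 bits the fraction. A minimal sketch of converting between a float sample in the -1.0…1.0 range and this format (the helper names are made up for illustration and are not part of MixerHost):

// Hypothetical helpers illustrating the 8.24 layout of AudioUnitSampleType.
static inline AudioUnitSampleType FloatToFixed824 (Float32 sample) {
    // Scale by 2^24 so the fractional part lands in the low 24 bits.
    return (AudioUnitSampleType) (sample * (Float32) (1 << kAudioUnitSampleFractionBits));
}

static inline Float32 Fixed824ToFloat (AudioUnitSampleType sample) {
    return (Float32) sample / (Float32) (1 << kAudioUnitSampleFractionBits);
}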

But I can't figure out where this data is being read in and where it is stored. Here's the code that loads the audio data and buffers it:

- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile)  {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileLengthFrames,
                        &frameLengthPropertySize,
                        &totalFramesInFile
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result =    ExtAudioFileGetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_FileDataFormat,
                        &formatPropertySize,
                        &fileAudioFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel, 
        //    or mono, audio data
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};
        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;
            // Sound is stereo, so allocate memory in the soundStructArray instance variable to  
            //    hold the right channel audio data
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio 
        //        file object. This is the format used for the audio data placed into the audio 
        //        buffer in the SoundStruct data structure, which is in turn used in the 
        //        inputRenderCallback callback function.

        result =    ExtAudioFileSetProperty (
                        audioFileObject,
                        kExtAudioFileProperty_ClientDataFormat,
                        sizeof (importFormat),
                        &importFormat
                    );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //        1. It gives the ExtAudioFileRead function the configuration it 
        //            needs to correctly provide the data to the buffer.
        //
        //        2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so 
        //            that audio data obtained from disk using the ExtAudioFileRead function
        //            goes to that buffer

        // Allocate memory for the buffer list struct according to the number of 
        //    channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
        );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // initialize the mNumberBuffers member
        bufferList->mNumberBuffers = channelCount;

        // initialize the mBuffers member to 0
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // set up the AudioBuffer structs in the buffer list
        bufferList->mBuffers[0].mNumberChannels  = 1;
        bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData            = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels  = 1;
            bufferList->mBuffers[1].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData            = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        //    into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);            
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the 
        //    beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        //    closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}

Which part contains the array of audio samples which have to be reversed? Is it the AudioUnitSampleType?

bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;

Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32, not an array.
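
For context, the soundStruct elements used above are declared roughly along these lines in the MixerHost sample (reconstructed here from how the fields are used in the code; the exact declaration may differ) — audioDataLeft is a pointer to the calloc'd sample array, not a single sample:

typedef struct {
    BOOL                 isStereo;        // YES when the file has two channels
    UInt64               frameCount;      // total frames read from the file
    UInt32               sampleNumber;    // current playback position, reset to 0 after loading
    AudioUnitSampleType *audioDataLeft;   // left (or mono) channel sample buffer
    AudioUnitSampleType *audioDataRight;  // right channel sample buffer (stereo only)
} soundStruct;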

I found a clue on the Core Audio mailing list:

Well, nothing to do with iPh*n* as far as I know (unless some audio API has been omitted -- I am not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need properties of the audio file it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on uncompressed data, it should be about as easy as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) reading whatever stream you have backwards.

This is what I tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}
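
One likely reason the loop above produces silence: i is declared UInt64, so in the comparison i > -1 the -1 is converted to an unsigned value and the condition is never true — the loop body never runs, and reversedData stays zeroed from the calloc. A minimal sketch of the same reversal with a signed index (variable names as above):

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData =
    (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

// Walk the source buffer from the last frame to the first and write the
// samples out front to back.
UInt64 j = 0;
for (SInt64 i = (SInt64) totalFramesInFile - 1; i >= 0; i--) {
    reversedData[j] = leftData[i];
    j++;
}

// A stereo file would need the same treatment applied to audioDataRight,
// with its own reversed buffer.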

Source: https://stackoverflow.com/questions/12027003

