How to play audio backwards?
Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing back that reversed audio data.
Are there any existing examples for iOS that show how this is done?
I found an example project called MixerHost, which at some point uses an AudioUnitSampleType to hold the audio data that has been read from file, assigning it to a buffer. This type is defined as:
typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24
And according to Apple:
The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.
In other words, it holds noninterleaved linear PCM audio data.
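As a side illustration of what 8.24 fixed point means for these samples (my own sketch, not MixerHost code): a normalized float sample is scaled by 2^24, leaving 8 bits above the binary point for the integer part and sign. The useful consequence for reversal is that each sample is still just one SInt32, so a channel is a plain integer array:

#include <stdint.h>

typedef int32_t AudioUnitSampleType;           // matches the typedef above
#define kAudioUnitSampleFractionBits 24

// Convert a normalized float sample (roughly -1.0 .. 1.0) to 8.24 fixed point.
static inline AudioUnitSampleType FloatTo824 (float sample) {
    return (AudioUnitSampleType) (sample * (float) (1 << kAudioUnitSampleFractionBits));
}

// And back: divide out the 24 fractional bits.
static inline float Fixed824ToFloat (AudioUnitSampleType sample) {
    return (float) sample / (float) (1 << kAudioUnitSampleFractionBits);
}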
But I can't figure out where this data is being read in, and where it is stored. Here's the code that loads the audio data and buffers it:
- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile)  {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileLengthFrames,
                     &frameLengthPropertySize,
                     &totalFramesInFile
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable.
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileDataFormat,
                     &formatPropertySize,
                     &fileAudioFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel,
        // or mono, audio data.
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};

        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;

            // Sound is stereo, so allocate memory in the soundStructArray instance variable to
            // hold the right channel audio data.
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio
        // file object. This is the format used for the audio data placed into the audio
        // buffer in the SoundStruct data structure, which is in turn used in the
        // inputRenderCallback callback function.
        result = ExtAudioFileSetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_ClientDataFormat,
                     sizeof (importFormat),
                     &importFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //   1. It gives the ExtAudioFileRead function the configuration it
        //      needs to correctly provide the data to the buffer.
        //
        //   2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
        //      that audio data obtained from disk using the ExtAudioFileRead function
        //      goes to that buffer.

        // Allocate memory for the buffer list struct according to the number of
        // channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
                         sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
                     );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // initialize the mNumberBuffers member
        bufferList->mNumberBuffers = channelCount;

        // initialize the mBuffers member to 0
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // set up the AudioBuffer structs in the buffer list
        bufferList->mBuffers[0].mNumberChannels = 1;
        bufferList->mBuffers[0].mDataByteSize   = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData           = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels = 1;
            bufferList->mBuffers[1].mDataByteSize   = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData           = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        // into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the
        // beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        // closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}
Which part contains the array of audio samples that has to be reversed? Is it the AudioUnitSampleType buffer assigned here?

bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;
Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32 but not an array.

I found a clue on a Core Audio mailing list:
Well, nothing to do with iPh*n* as far as I know (unless some audio API has been omitted -- I am not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide you with what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so, if you don't need properties of the audio file it is pretty straightforward once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you're thinking. Once you have a handle on uncompressed data, it should be about as easy as reversing a string. Then you would of course replace the file's data with the reversed data, or you could just feed the audio output (or wherever you're sending the reversed signal) reading whatever stream you have backwards.
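Following that advice, the "reversing a string" step can be sketched as an in-place swap from both ends of each channel buffer. This is a minimal illustration, not code from MixerHost or the thread; ReverseChannel is a hypothetical helper name, and it assumes the noninterleaved buffers filled in by readAudioFilesIntoMemory above:

// Reverse one noninterleaved channel in place. With the 8.24 client format
// set above, each channel is a flat array of AudioUnitSampleType values,
// one sample per frame, so reversal is a plain array reversal.
static void ReverseChannel (AudioUnitSampleType *samples, UInt64 frameCount) {
    if (frameCount < 2) return;
    UInt64 head = 0;
    UInt64 tail = frameCount - 1;
    while (head < tail) {
        AudioUnitSampleType tmp = samples[head];
        samples[head] = samples[tail];
        samples[tail] = tmp;
        head++;
        tail--;
    }
}

// Usage, per loaded file:
// ReverseChannel (soundStructArray[audioFile].audioDataLeft, soundStructArray[audioFile].frameCount);
// if (soundStructArray[audioFile].isStereo)
//     ReverseChannel (soundStructArray[audioFile].audioDataRight, soundStructArray[audioFile].frameCount);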
This is what I tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:
AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}
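A likely reason for the silence, as a side note on the snippet above: i is declared as UInt64, an unsigned type, so in the condition i > -1 the -1 is converted to UINT64_MAX and the comparison is always false. The loop body therefore never runs, and reversedData keeps the zeros that calloc wrote, i.e. silence. A minimal corrected sketch, assuming the same variables as above:

AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

// Use a signed index so the "i >= 0" test can actually terminate the loop;
// with an unsigned index, "i > -1" never holds and nothing is copied.
UInt64 j = 0;
for (SInt64 i = (SInt64) totalFramesInFile - 1; i >= 0; i--) {
    reversedData[j] = leftData[i];
    j++;
}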
Source: https://stackoverflow.com/questions/12027003