
Getting started with Apache Tika?



I would like to program a Java web crawler that uses Apache Tika to download webpage textual content, but I'm new to using Apache projects and I haven't found a definitive source that clarifies exactly how to integrate Tika into a program. From what I've gathered from the Internet, I have built Tika with Maven on the command line, but I'm not sure where to go from here to use Tika classes like Parser, etc. in my Java programs. I'm using Eclipse, if that makes a difference; I've also installed the Maven plugin for Eclipse but I'm not exactly sure what to do with it. Do I need an "import ..." line? Please excuse my beginner questions, but a step-by-step guide to getting Tika ready to use would be appreciated.
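
As a rough sketch of the kind of usage being asked about (assuming the org.apache.tika:tika-parsers Maven dependency, which pulls in tika-core, has been declared in the project's pom.xml, and with the class name and URL as placeholders), extracting the plain text of a web page with Tika can look roughly like this:

import java.io.InputStream;
import java.net.URL;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

public class TikaFetchSketch {
    public static void main(String[] args) throws Exception {
        // AutoDetectParser picks a concrete parser (the HTML parser here) based on the detected content type
        AutoDetectParser parser = new AutoDetectParser();
        // -1 disables BodyContentHandler's default write limit on the collected text
        BodyContentHandler handler = new BodyContentHandler(-1);
        Metadata metadata = new Metadata();

        // Placeholder URL; a real crawler would feed its fetched streams here
        try (InputStream stream = new URL("https://example.com/").openStream()) {
            parser.parse(stream, handler, metadata, new ParseContext());
        }

        // The handler now holds the extracted plain text of the page
        System.out.println(handler.toString());
    }
}

These classes live in tika-core, while the tika-parsers artifact supplies the concrete format parsers that AutoDetectParser delegates to, so once the Maven dependency is in place the import lines above are all Eclipse needs.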


Source: https://stackoverflow.com/questions/17821895
Updated: 2023-06-23 13:06

Accepted answer

Not sure if I'm directly answering your question, but one possible solution is to feed the output audio device manually (push mode) whenever new data arrives.

You can also use a custom (QFile-inherited) class to record the sound, feeding both the file and the output audio device as the data comes in.

Here is an example:

AudioOutput.h:

#ifndef AUDIOOUTPUT_H
#define AUDIOOUTPUT_H

#include <QtCore>
#include <QtMultimedia>

#define MAX_BUFFERED_TIME 10*1000

static inline int timeToSize(int ms, const QAudioFormat &format)
{
    return ((format.channelCount() * (format.sampleSize() / 8) * format.sampleRate()) * ms / 1000);
}

class AudioOutput : public QObject
{
    Q_OBJECT
public:
    explicit AudioOutput(QObject *parent = nullptr);

public slots:
    bool start(const QAudioDeviceInfo &devinfo,
               const QAudioFormat &format,
               int time_to_buffer);

    void write(const QByteArray &data);

private slots:
    void verifyBuffer();
    void preplay();
    void play();

private:
    bool m_initialized;
    QAudioOutput *m_audio_output;
    QIODevice *m_device;
    QByteArray m_buffer;
    bool m_buffer_requested;
    bool m_play_called;
    int m_size_to_buffer;
    int m_time_to_buffer;
    int m_max_size_to_buffer;
    QAudioFormat m_format;
};

#endif // AUDIOOUTPUT_H

AudioRecorder.h:

#ifndef AUDIORECORDER_H
#define AUDIORECORDER_H

#include <QtCore>
#include <QtMultimedia>

class AudioRecorder : public QFile
{
    Q_OBJECT
public:
    explicit AudioRecorder(const QString &name, const QAudioFormat &format, QObject *parent = nullptr);
    ~AudioRecorder();

    using QFile::open;

public slots:
    bool open();
    qint64 write(const QByteArray &data);
    void close();

private:
    void writeHeader();
    bool hasSupportedFormat();
    QAudioFormat format;
};

#endif // AUDIORECORDER_H

AudioOutput.cpp:

#include "audiooutput.h"

AudioOutput::AudioOutput(QObject *parent) : QObject(parent)
{
    m_initialized = false;
    m_audio_output = nullptr;
    m_device = nullptr;
    m_buffer_requested = true;
    m_play_called = false;
    m_size_to_buffer = 0;
    m_time_to_buffer = 0;
    m_max_size_to_buffer = 0;
}

bool AudioOutput::start(const QAudioDeviceInfo &devinfo,
                        const QAudioFormat &format,
                        int time_to_buffer)
{
    if (!devinfo.isFormatSupported(format))
    {
        qDebug() << "Format not supported by output device";
        return m_initialized;
    }

    m_format = format;

    int internal_buffer_size;

    //Adjust internal buffer size
    if (format.sampleRate() >= 44100)
        internal_buffer_size = (1024 * 10) * format.channelCount();
    else if (format.sampleRate() >= 24000)
        internal_buffer_size = (1024 * 6) * format.channelCount();
    else
        internal_buffer_size = (1024 * 4) * format.channelCount();

    //Initialize the audio output device
    m_audio_output = new QAudioOutput(devinfo, format, this);
    //Increase the buffer size to enable higher sample rates
    m_audio_output->setBufferSize(internal_buffer_size);

    m_time_to_buffer = time_to_buffer;
    //Compute the size in bytes to be buffered based on the current format
    m_size_to_buffer = timeToSize(m_time_to_buffer, m_format);
    //Define the maximum size the buffer is allowed to reach for the given time
    //This value is used to discard buffered data that is too old
    m_max_size_to_buffer = m_size_to_buffer + timeToSize(MAX_BUFFERED_TIME, m_format);

    m_device = m_audio_output->start();

    if (!m_device)
    {
        qDebug() << "Failed to open output audio device";
        return m_initialized;
    }

    //Timer that keeps playing data while it's available in the internal buffer
    QTimer *timer_play = new QTimer(this);
    timer_play->setTimerType(Qt::PreciseTimer);
    connect(timer_play, &QTimer::timeout, this, &AudioOutput::preplay);
    timer_play->start(10);

    //Timer that checks for too old data in the buffer
    QTimer *timer_verifier = new QTimer(this);
    connect(timer_verifier, &QTimer::timeout, this, &AudioOutput::verifyBuffer);
    timer_verifier->start(qMax(m_time_to_buffer, 10));

    m_initialized = true;

    return m_initialized;
}

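//Drop all buffered data once it has grown past the allowed maximum (it is considered too old to play)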
void AudioOutput::verifyBuffer()
{
    if (m_buffer.size() >= m_max_size_to_buffer)
        m_buffer.clear();
}

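//Append incoming audio data to the local buffer and schedule playback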
void AudioOutput::write(const QByteArray &data)
{
    m_buffer.append(data);
    preplay();
}

void AudioOutput::preplay()
{
    if (!m_initialized)
        return;

    //Check whether a call to play() is already pending;
    //if not, invoke play() asynchronously
    if (!m_play_called)
    {
        m_play_called = true;
        QMetaObject::invokeMethod(this, "play", Qt::QueuedConnection);
    }
}

void AudioOutput::play()
{
    //Mark that the pending async call has now been handled
    m_play_called = false;

    if (m_buffer.isEmpty())
    {
        //The buffer is empty, so nothing should be played
        //until it has refilled to at least the minimum buffered size
        m_buffer_requested = true;
        return;
    }
    else if (m_buffer.size() < m_size_to_buffer)
    {
        //The buffer doesn't contain enough data yet; if it is refilling
        //from an empty state, nothing should be played until
        //the minimum buffered size is reached
        if (m_buffer_requested)
            return;
    }
    else
    {
        //Buffer is ready and data can be played
        m_buffer_requested = false;
    }

    int readlen = m_audio_output->periodSize();

    int chunks = m_audio_output->bytesFree() / readlen;

    //Play data while it's available in the output device
    while (chunks)
    {
        //Get chunk from the buffer
        QByteArray samples = m_buffer.mid(0, readlen);
        int len = samples.size();
        m_buffer.remove(0, len);

        //Write data to the output device
        if (len)
            m_device->write(samples);

        //If chunk is smaller than the output chunk size, exit loop
        if (len != readlen)
            break;

        //Decrease the available number of chunks
        chunks--;
    }
}

AudioRecorder.cpp:

#include "audiorecorder.h"

AudioRecorder::AudioRecorder(const QString &name, const QAudioFormat &format, QObject *parent) : QFile(name, parent), format(format)
{

}

AudioRecorder::~AudioRecorder()
{
    if (!isOpen())
        return;

    close();
}

bool AudioRecorder::hasSupportedFormat()
{
    return (format.sampleSize() == 8
            && format.sampleType() == QAudioFormat::UnSignedInt)
            || (format.sampleSize() > 8
                && format.sampleType() == QAudioFormat::SignedInt
                && format.byteOrder() == QAudioFormat::LittleEndian);
}

bool AudioRecorder::open()
{
    if (!hasSupportedFormat())
    {
        setErrorString("Wav PCM supports only 8-bit unsigned samples "
                       "or 16-bit (or more) signed samples (in little endian)");
        return false;
    }
    else
    {
        if (!QFile::open(ReadWrite | Truncate))
            return false;
        writeHeader();
        return true;
    }
}

qint64 AudioRecorder::write(const QByteArray &data)
{
    return QFile::write(data);
}

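//Write the canonical 44-byte WAV/RIFF header, with size fields left as placeholders for close()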
void AudioRecorder::writeHeader()
{
    QDataStream out(this);
    out.setByteOrder(QDataStream::LittleEndian);

    // RIFF chunk
    out.writeRawData("RIFF", 4);
    out << quint32(0); // Placeholder for the RIFF chunk size (filled by close())
    out.writeRawData("WAVE", 4);

    // Format description chunk
    out.writeRawData("fmt ", 4);
    out << quint32(16); // "fmt " chunk size (always 16 for PCM)
    out << quint16(1); // data format (1 => PCM)
    out << quint16(format.channelCount());
    out << quint32(format.sampleRate());
    out << quint32(format.sampleRate() * format.channelCount()
                   * format.sampleSize() / 8 ); // bytes per second
    out << quint16(format.channelCount() * format.sampleSize() / 8); // Block align
    out << quint16(format.sampleSize()); // Significant Bits Per Sample

    // Data chunk
    out.writeRawData("data", 4);
    out << quint32(0); // Placeholder for the data chunk size (filled by close())

    Q_ASSERT(pos() == 44); // Must be 44 for WAV PCM
}

void AudioRecorder::close()
{
    // Fill the header size placeholders
    quint32 fileSize = size();

    QDataStream out(this);
    // Use the same byte order as in writeHeader()
    out.setByteOrder(QDataStream::LittleEndian);
    // RIFF chunk size
    seek(4);
    out << quint32(fileSize - 8);

    // data chunk size
    seek(40);
    out << quint32(fileSize - 44);

    QFile::close();
}

main.cpp:

#include <QtCore>
#include "audiooutput.h"
#include "audiorecorder.h"
#include <signal.h>

QByteArray tone_generator()
{
    //Tone generator from http://www.cplusplus.com/forum/general/129827/
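    //Generates eight consecutive 1-second sine tones at increasing frequencies as signed 16-bit mono samples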

    const unsigned int samplerate = 8000;
    const unsigned short channels = 1;

    const double pi = M_PI;
    const qint16 amplitude = std::numeric_limits<qint16>::max() * 0.5;

    const unsigned short n_frequencies = 8;
    const unsigned short n_seconds_each = 1;

    float frequencies[n_frequencies] = {55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0, 7040.0};

    const int n_samples = channels * samplerate * n_frequencies * n_seconds_each;

    QVector<qint16> data;
    data.resize(n_samples);

    int index = n_samples / n_frequencies;

    for (unsigned short i = 0; i < n_frequencies; ++i)
    {
        float freq = frequencies[i];
        double d = (samplerate / freq);
        int c = 0;

        for (int j = index * i; j < index * (i + 1); j += 2)
        {
            double deg = 360.0 / d;
            data[j] = data[j + (channels - 1)] = qSin((c++ * deg) * pi / 180.0) * amplitude;
        }
    }

    return QByteArray((char*)data.data(), data.size() * sizeof(qint16));
}

void signalHandler(int signum)
{
    qDebug().nospace() << "Interrupt signal (" << signum << ") received.";

    qApp->exit();
}

int main(int argc, char *argv[])
{
    //Handle console close to ensure destructors are called
#ifdef Q_OS_WIN
    signal(SIGBREAK, signalHandler);
#else
    signal(SIGHUP, signalHandler);
#endif
    signal(SIGINT, signalHandler);

    QCoreApplication a(argc, argv);

    QAudioFormat format;
    format.setSampleRate(8000);
    format.setChannelCount(1);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    AudioOutput output;

    AudioRecorder file("tone.wav", format);

    if (!output.start(QAudioDeviceInfo::defaultOutputDevice(), format, 10 * 1000)) //10 seconds of buffer
        return a.exec();

    if (!file.open())
    {
        qDebug() << qPrintable(file.errorString());
        return a.exec();
    }

    qDebug() << "Started!";

    QByteArray audio_data = tone_generator();

    QTimer timer;

    QObject::connect(&timer, &QTimer::timeout, [&]{
        qDebug() << "Writting" << audio_data.size() << "bytes";
        output.write(audio_data);
        file.write(audio_data);
    });

    qDebug() << "Writting" << audio_data.size() << "bytes";
    output.write(audio_data);
    file.write(audio_data);

    timer.start(8000); //8 seconds because we generated 8 seconds of sound

    return a.exec();
}

