
jQuery: Finding an element

I'm having trouble with this little thing. I have code that looks like this:

<table>.........
    <td>
        <span class="editable" id="name"><?= $name ?></span>
        <img class="edit_field" src="icons/edit.png"/>
    </td>
....... 
</table>

I want to obtain the id of the span so that I can then change it (for example, add an input box inside it).

I have this:

$('.edit_field').live('click', function() {
    var td = $(this).parent();
    var span = td.find(".editable");
    alert(span.id)
});

But it's not working... Any ideas? (I'm trying to do it like this so it's reusable)
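
A minimal sketch of one common fix, assuming jQuery 1.7+ (where the deprecated .live() is replaced by a delegated .on() handler): td.find(".editable") returns a jQuery object rather than a DOM node, so it has no id property; the attribute can be read with .attr('id') or from the underlying element.

$(document).on('click', '.edit_field', function() {
    var td = $(this).parent();
    var span = td.find('.editable');   // jQuery object, not a raw DOM node
    var id = span.attr('id');          // or: span.prop('id') / span[0].id
    alert(id);
});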


Source: https://stackoverflow.com/questions/11546741
Updated: 2022-04-03 18:04

Accepted answer

The idea of a DataGenerator is to feed fit_generator a stream of data in batches. This gives you control over how the data is produced, e.g. whether you load it from files or apply data augmentation the way ImageDataGenerator does.

Here is a modified version of the MNIST Siamese example with a custom DataGenerator; you can work from this as a starting point.

import numpy as np
np.random.seed(1337)  # for reproducibility

import random
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Input, Lambda
from keras.optimizers import SGD, RMSprop
from keras import backend as K

class DataGenerator(object):
    """docstring for DataGenerator"""
    def __init__(self, batch_sz):
        # the data, shuffled and split between train and test sets
        (X_train, y_train), (X_test, y_test) = mnist.load_data()
        X_train = X_train.reshape(60000, 784)
        X_test = X_test.reshape(10000, 784)
        X_train = X_train.astype('float32')
        X_test = X_test.astype('float32')
        X_train /= 255
        X_test /= 255

        # create training+test positive and negative pairs
        digit_indices = [np.where(y_train == i)[0] for i in range(10)]
        self.tr_pairs, self.tr_y = self.create_pairs(X_train, digit_indices)

        digit_indices = [np.where(y_test == i)[0] for i in range(10)]
        self.te_pairs, self.te_y = self.create_pairs(X_test, digit_indices)

        self.tr_pairs_0 = self.tr_pairs[:, 0]
        self.tr_pairs_1 = self.tr_pairs[:, 1]
        self.te_pairs_0 = self.te_pairs[:, 0]
        self.te_pairs_1 = self.te_pairs[:, 1]

        self.batch_sz = batch_sz
        # use integer division so these stay ints under Python 3 (a whole multiple of batch_sz)
        self.samples_per_train  = (self.tr_pairs.shape[0] // self.batch_sz) * self.batch_sz
        self.samples_per_val    = (self.te_pairs.shape[0] // self.batch_sz) * self.batch_sz


        self.cur_train_index=0
        self.cur_val_index=0

    def create_pairs(self, x, digit_indices):
        '''Positive and negative pair creation.
        Alternates between positive and negative pairs.
        '''
        pairs = []
        labels = []
        n = min([len(digit_indices[d]) for d in range(10)]) - 1
        for d in range(10):
            for i in range(n):
                z1, z2 = digit_indices[d][i], digit_indices[d][i+1]
                pairs += [[x[z1], x[z2]]]
                inc = random.randrange(1, 10)
                dn = (d + inc) % 10
                z1, z2 = digit_indices[d][i], digit_indices[dn][i]
                pairs += [[x[z1], x[z2]]]
                labels += [1, 0]
        return np.array(pairs), np.array(labels)

    def next_train(self):
        while 1:
            self.cur_train_index += self.batch_sz
            if self.cur_train_index >= self.samples_per_train:
                self.cur_train_index=0
            yield ([    self.tr_pairs_0[self.cur_train_index:self.cur_train_index+self.batch_sz], 
                        self.tr_pairs_1[self.cur_train_index:self.cur_train_index+self.batch_sz]
                    ],
                    self.tr_y[self.cur_train_index:self.cur_train_index+self.batch_sz]
                )

    def next_val(self):
        while 1:
            self.cur_val_index += self.batch_sz
            if self.cur_val_index >= self.samples_per_val:
                self.cur_val_index=0
            yield ([    self.te_pairs_0[self.cur_val_index:self.cur_val_index+self.batch_sz], 
                        self.te_pairs_1[self.cur_val_index:self.cur_val_index+self.batch_sz]
                    ],
                    self.te_y[self.cur_val_index:self.cur_val_index+self.batch_sz]
                )

def euclidean_distance(vects):
    x, y = vects
    return K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))


def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 1)


def contrastive_loss(y_true, y_pred):
    '''Contrastive loss from Hadsell-et-al.'06
    http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    '''
    margin = 1
    return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))


def create_base_network(input_dim):
    '''Base network to be shared (eq. to feature extraction).
    '''
    seq = Sequential()
    seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    return seq


def compute_accuracy(predictions, labels):
    '''Compute classification accuracy with a fixed threshold on distances.
    '''
    return labels[predictions.ravel() < 0.5].mean()


input_dim = 784
nb_epoch = 20
batch_size=128

datagen = DataGenerator(batch_size)

# network definition
base_network = create_base_network(input_dim)

input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))

# because we re-use the same instance `base_network`,
# the weights of the network
# will be shared across the two branches
processed_a = base_network(input_a)
processed_b = base_network(input_b)

distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])

model = Model(input=[input_a, input_b], output=distance)

# train
rms = RMSprop()
model.compile(loss=contrastive_loss, optimizer=rms)
model.fit_generator(generator=datagen.next_train(), samples_per_epoch=datagen.samples_per_train, nb_epoch=nb_epoch, validation_data=datagen.next_val(), nb_val_samples=datagen.samples_per_val)
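
The fit_generator call above uses the Keras 1.x argument names (samples_per_epoch, nb_epoch, nb_val_samples). As a rough sketch, under Keras 2.x the same training loop would count steps in batches rather than samples, assuming the same DataGenerator and model defined above:

# Sketch only: Keras 2.x-style call, steps counted in batches instead of samples
model.fit_generator(
    generator=datagen.next_train(),
    steps_per_epoch=datagen.samples_per_train // batch_size,
    epochs=nb_epoch,
    validation_data=datagen.next_val(),
    validation_steps=datagen.samples_per_val // batch_size)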
