
Can't access hashmap in mapper, MapReduce

I'd like to replace values in my input data, in the mapper, using a dictionary (CSV) defined in another file. So I tried to load the CSV data into a HashMap and refer to it in the mapper.
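A minimal sketch of what that dictionary lookup in the mapper might look like (the class name, input layout, and emitted output are assumptions for illustration, not part of the original program):

import java.io.IOException;
import java.util.HashMap;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of the intended replacement step (not the original code):
// each input line is assumed to hold one animal name, which is replaced by the
// leg count looked up in the CSV-backed dictionary.
public class ReplaceMapper extends Mapper<LongWritable, Text, Text, Text> {

  private final HashMap<String, Integer> dict = new HashMap<String, Integer>();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String name = value.toString().trim();
    Integer legs = dict.get(name);          // dict would be filled from list.csv
    if (legs != null) {
      context.write(new Text(name), new Text(String.valueOf(legs)));
    }
  }
}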

The Java code and CSV below are a simplified version of my program. The code works in my local environment (Mac OS X, pseudo-distributed mode), but not on my EC2 instance (Ubuntu, pseudo-distributed mode).

In detail, I get this stdout during the run:

cat:4
human:2
flamingo:1

This means the file reader successfully put the CSV data into the HashMap.

However, the mapper maps nothing, so I get empty output in the EC2 environment, even though locally it maps 3 * (the number of lines of the input file) elements and generates the following:

test,cat
test,flamingo
test,human

Does anyone have an answer or a hint?

Test.java

import java.io.IOException;
import java.util.StringTokenizer;
import java.io.FileReader;
import java.io.BufferedReader;
import java.io.DataInput; 
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.io.WritableUtils;

public class Test {

  public static HashMap<String, Integer> map  = new HashMap<String, Integer>();

  public static class Mapper1 extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
      for(Map.Entry<String, Integer> e : map.entrySet()) {
        context.write(new Text(e.getKey()), new Text("test"));
      }
    }
  }

  public static class Reducer1 extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> vals, Context context) throws IOException, InterruptedException {
      context.write(new Text("test"), key);
    }
  }

  public static class CommaTextOutputFormat extends TextOutputFormat<Text, Text> {
    @Override
    public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
      Configuration conf = job.getConfiguration();
      String extension = ".txt";
      Path file = getDefaultWorkFile(job, extension);
      FileSystem fs = file.getFileSystem(conf);
      FSDataOutputStream fileOut = fs.create(file, false);
      return new LineRecordWriter<Text, Text>(fileOut, ",");
    }
  }

  public static void get_list(String list_path){
    try {
      FileReader fr = new FileReader(list_path);
      BufferedReader br = new BufferedReader(fr);
      String line = null, name = null;
      int leg = 0;

      while ((line = br.readLine()) != null) {
        if (!line.startsWith("name") && !line.trim().isEmpty()) {
          String[] name_leg = line.split(",", 0);
          name = name_leg[0];
          leg = Integer.parseInt(name_leg[1]);
          map.put(name, leg);
        }
      }
      br.close();
    }
    catch(IOException ex) {
      System.err.println(ex.getMessage());
      ex.printStackTrace();
    }

    for(Map.Entry<String, Integer> e : map.entrySet()) {
      System.out.println(e.getKey() + ":" + e.getValue());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    if (args.length != 3) {
      System.err.println(
        "Need 3 arguments: <input dir> <output base dir> <list path>");
      System.exit(1);
    }

    get_list(args[2]);
    Job job = Job.getInstance(conf, "test");

    job.setJarByClass(Test.class);
    job.setMapperClass(Mapper1.class);
    job.setReducerClass(Reducer1.class);
    job.setNumReduceTasks(1);
    job.setInputFormatClass(TextInputFormat.class);

    // mapper output
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    // reducer output
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    // formatter
    job.setOutputFormatClass(CommaTextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    if(!job.waitForCompletion(true)){
      System.exit(1);
    }

    System.out.println("All Finished");
    System.exit(0);
  }
}

list.csv (args[2])

name,legs
cat,4
human,2
flamingo,1

=================================

I followed @Rahul Sharma's answer and modified my code as below; now it works in both environments. The root cause was that get_list() populated the static HashMap only in the JVM running the driver, while on EC2 the map tasks run in separate JVMs and therefore saw an empty map (presumably the local run effectively executed everything in one JVM, which is why it worked there). Distributing list.csv via the distributed cache and loading it in the mapper's setup() makes the dictionary available to every map task.

Thank you very much @Rahul Sharma and @Serhiy for your precise answer and useful comments.
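In short, the change is to register list.csv with the distributed cache in the driver (job.addCacheFile(new Path(args[2]).toUri())) and to rebuild the dictionary inside each map task's setup(). A condensed sketch of just that part (the class name is illustrative; the full listing follows below):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.HashMap;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch of the distributed-cache pattern used in the fix: setup() runs once per
// map task, in that task's own JVM, before any map() call, so the dictionary is
// rebuilt wherever the task actually executes.
public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

  private final HashMap<String, Integer> dict = new HashMap<String, Integer>();

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    URI[] files = context.getCacheFiles();          // files added via job.addCacheFile()
    Path listPath = new Path(files[0]);
    FileSystem fs = listPath.getFileSystem(context.getConfiguration());
    BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(listPath)));
    try {
      String line;
      while ((line = br.readLine()) != null) {
        if (!line.startsWith("name") && !line.trim().isEmpty()) {
          String[] nameLeg = line.split(",", 0);
          dict.put(nameLeg[0], Integer.parseInt(nameLeg[1]));
        }
      }
    } finally {
      br.close();
    }
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // The dictionary is populated in every task, so lookups also work on EC2.
    String name = value.toString().trim();
    Integer legs = dict.get(name);
    if (legs != null) {
      context.write(new Text(name), new Text(String.valueOf(legs)));
    }
  }
}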

Test.java

import java.io.IOException;
import java.util.StringTokenizer;
import java.io.FileReader;
import java.io.BufferedReader;
import java.io.DataInput; 
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.net.URI;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.io.WritableUtils;

public class Test {

  public static HashMap<String, Integer> map  = new HashMap<String, Integer>();

  public static class Mapper1 extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
      URI[] files = context.getCacheFiles();
      Path list_path = new Path(files[0]);

      try {
        FileSystem fs = list_path.getFileSystem(context.getConfiguration());
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(list_path)));
        String line = null, name = null;
        int leg = 0;

        while ((line = br.readLine()) != null) {
          if (!line.startsWith("name") && !line.trim().isEmpty()) {
            String[] name_leg = line.split(",", 0);
            name = name_leg[0];
            leg = Integer.parseInt(name_leg[1]);
            map.put(name, leg);
          }
        }
        br.close();
      }
      catch(IOException ex) {
        System.err.println(ex.getMessage());
        ex.printStackTrace();
      }

      for(Map.Entry<String, Integer> e : map.entrySet()) {
        System.out.println(e.getKey() + ":" + e.getValue());
      }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
      for(Map.Entry<String, Integer> e : map.entrySet()) {
        context.write(new Text(e.getKey()), new Text("test"));
      }
    }

  }

  public static class Reducer1 extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> vals, Context context) throws IOException, InterruptedException {
      context.write(new Text("test"), key);
    }
  }

  // Writer
  public static class CommaTextOutputFormat extends TextOutputFormat<Text, Text> {
    @Override
    public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
      Configuration conf = job.getConfiguration();
      String extension = ".txt";
      Path file = getDefaultWorkFile(job, extension);
      FileSystem fs = file.getFileSystem(conf);
      FSDataOutputStream fileOut = fs.create(file, false);
      return new LineRecordWriter<Text, Text>(fileOut, ",");
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    if (args.length != 3) {
      System.err.println(
        "Need 3 arguments: <input dir> <output base dir> <list path>");
      System.exit(1);
    }

    Job job = Job.getInstance(conf, "test");
    job.addCacheFile(new Path(args[2]).toUri());

    job.setJarByClass(Test.class);
    job.setMapperClass(Mapper1.class);
    job.setReducerClass(Reducer1.class);
    job.setNumReduceTasks(1);
    job.setInputFormatClass(TextInputFormat.class);

    // mapper output
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    // reducer output
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    // formatter
    job.setOutputFormatClass(CommaTextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    if(!job.waitForCompletion(true)){
      System.exit(1);
    }

    System.out.println("All Finished");
    System.exit(0);
  }
}


Accepted answer

The example you give is invalid in HTML as well as XHTML. No current recommendation provides a way to include a stylesheet unless scripting is enabled.

In general, you should avoid <noscript>. Start with something that works, and then build on it.

In this case, you could write your stylesheet for non-JS clients, then:

<body>
<script type="text/javascript">document.body.className += ' js';</script>

… and include additional rule-sets specific to JS being enabled with:

body.js foo {
}

Alternatively, you could do something like:

<link href="css/stylenojs.css" rel="stylesheet" type="text/css" id="nojscss" />

and

var nojscss = document.getElementById('nojscss');
nojscss.parentNode.removeChild(nojscss);
