How do I programmatically control a Chrome instance from the command line (on Linux)?
How do I control a running Google Chrome instance from the command line? I'm hoping there's something like a -remote flag that would let me send in some JavaScript or some such. In particular, I would like to reload the topmost document in the foremost window. I'm especially interested in Linux/macOS solutions.
Source: https://stackoverflow.com/questions/2091258
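For the question as asked: Chrome has no -remote flag, but it can be started with --remote-debugging-port=9222, after which the pages it lists at http://localhost:9222/json each expose a WebSocket that accepts DevTools-protocol JSON commands such as Page.reload. A minimal sketch of building that reload command (sending it over the WebSocket is left out; the message id is an arbitrary choice):

```python
import json

def page_reload_command(msg_id=1, ignore_cache=False):
    # Build the DevTools-protocol message that reloads the current page.
    # It would be sent over a page's webSocketDebuggerUrl, as listed at
    # http://localhost:9222/json when Chrome is started with
    # --remote-debugging-port=9222.
    return json.dumps({
        "id": msg_id,
        "method": "Page.reload",
        "params": {"ignoreCache": ignore_cache},
    })

print(page_reload_command())
```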
Accepted answer
Based on your snippet of code, I would do something like this to chunk the file into 8 parts and then have the computation done by 4 workers (why 8 chunks and 4 workers? Just a random choice I made for the example):
```python
from multiprocessing import Pool
import itertools

def myfunction(lines):
    returnlist = []
    for line in lines:
        list_of_elem = line.split(",")
        elem_id = list_of_elem[1]
        elem_to_check = list_of_elem[5]
        ids = list_of_elem[2].split("|")
        for x in itertools.permutations(ids, 2):
            # x is a tuple of two ids, so join its elements individually
            returnlist.append(",".join(
                [elem_id, x[0], x[1],
                 "1\n" if x[1] == elem_to_check else "0\n"]))
    return returnlist

def chunk(it, size):
    # yield successive tuples of at most `size` items until exhausted
    it = iter(it)
    return iter(lambda: tuple(itertools.islice(it, size)), ())

if __name__ == "__main__":
    with open(r"my_input_file_to_be_processed.txt", "r") as f:
        my_data = f.read().split("\n")
    prep = [strings for strings in chunk(my_data, round(len(my_data) / 8))]
    with Pool(4) as p:
        res = p.map(myfunction, prep)
    result = res.pop(0)
    _ = list(map(lambda x: result.extend(x), res))
    print(result)  # ... or do something with the result
```
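The chunk helper above can be tried on its own to see how it splits an iterable; a minimal sketch:

```python
import itertools

def chunk(it, size):
    # yield successive tuples of at most `size` items until the
    # underlying iterator is exhausted (the () sentinel stops iter())
    it = iter(it)
    return iter(lambda: tuple(itertools.islice(it, size)), ())

print(list(chunk(range(10), 3)))  # → [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9,)]
```

Note that the final chunk may be shorter than the requested size.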
Edit: this assumes you are confident that all lines are formatted the same way and will cause no errors.
Based on your comments, it might be useful to find out what the problem is in your function/in the content of your file, either by testing it without multiprocessing, or by using try/except in a fairly broad/ugly way to be almost sure that some output is produced for every line (either the exception or the result):

```python
def myfunction(lines):
    returnlist = []
    for line in lines:
        try:
            list_of_elem = line.split(",")
            elem_id = list_of_elem[1]
            elem_to_check = list_of_elem[5]
            ids = list_of_elem[2].split("|")
            for x in itertools.permutations(ids, 2):
                returnlist.append(",".join(
                    [elem_id, x[0], x[1],
                     "1\n" if x[1] == elem_to_check else "0\n"]))
        except Exception as err:
            # record the failing line instead of crashing the worker
            returnlist.append('I encountered error {} on line {}'.format(err, line))
    return returnlist
```
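As a design note, the pop/extend dance used above to merge the per-worker lists can also be written with itertools.chain.from_iterable, which avoids mutating the first sublist in place. A small sketch (the nested lists stand in for the lists returned by each of the 4 workers):

```python
import itertools

# stand-ins for the per-chunk result lists returned by Pool.map
res = [["a,1\n"], ["b,0\n", "c,1\n"], [], ["d,0\n"]]

# flatten the list of lists into one result list, preserving order
result = list(itertools.chain.from_iterable(res))
print(result)  # → ['a,1\n', 'b,0\n', 'c,1\n', 'd,0\n']
```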