How to get a delay of 30 sec in the current time in PHP?

Date("F d Y H:i:s") gives the current time in PHP. How can I get the current time minus 30 seconds or minus 1 minute using PHP? Not sure, but something like Date("F d Y H:i:s-30") or Date("F d Y H:i-1:s")?
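(The answer below addresses a different, Python-related question; the PHP question itself is not answered on this page. In PHP the usual approach is to pass an adjusted Unix timestamp as the second argument of date(), e.g. date("F d Y H:i:s", time() - 30). The same idea, subtracting an offset from the current time before formatting it, can be sketched in Python, which is the language of the code further down. The strftime format string here is only a rough analogue of PHP's "F d Y H:i:s" and is an assumption, not part of the original question:

```python
from datetime import datetime, timedelta

now = datetime.now()
thirty_secs_ago = now - timedelta(seconds=30)  # current time minus 30 seconds
one_min_ago = now - timedelta(minutes=1)       # current time minus 1 minute

# format roughly like PHP's Date("F d Y H:i:s"), e.g. "October 04 2009 12:34:56"
print(thirty_secs_ago.strftime("%B %d %Y %H:%M:%S"))
print(one_min_ago.strftime("%B %d %Y %H:%M:%S"))
```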
Source: https://stackoverflow.com/questions/1853516
Accepted answer
If this is a continuation of your project from yesterday, you already have your download list in memory: just remove entries from the loaded list as their processes finish downloading, and only write the whole list back over the input file once you're exiting the 'downloader'. There is no reason to constantly write the changes down.
If you want to know (say, from an external process) when a URL gets downloaded even while your 'downloader' is running, write a new line to downloaded.dat each time a process reports a successful download. Of course, in both cases, write from within your main process/thread so you don't have to worry about mutexes.
UPDATE - Here's how to do it with an additional file, using the same code base as yesterday:
from itertools import cycle
from multiprocessing import Pool

# Downloader is assumed to come from yesterday's code base

def init_downloader(params):  # our downloader initializator
    downloader = Downloader(**params[0])  # instantiate our downloader
    downloader.run(params[1])  # run our downloader
    return params  # job finished, return the same params for identification

if __name__ == "__main__":  # important protection for cross-platform use
    downloader_params = [  # Downloaders will be initialized using these params
        {"port_number": 7751},
        {"port_number": 7851},
        {"port_number": 7951}
    ]
    downloader_cycle = cycle(downloader_params)  # use a cycle for round-robin distribution
    with open("downloaded_links.dat", "a+") as diff_file:  # open your diff file
        diff_file.seek(0)  # rewind the diff file to the beginning to capture all lines
        diff_links = {row.strip() for row in diff_file}  # load downloaded links into a set
        with open("input_links.dat", "r+") as input_file:  # open your input file
            available_links = []
            download_jobs = []  # store our downloader parameters + a link here
            # read our file line by line and filter out downloaded links
            for row in input_file:  # loop through our file
                link = row.strip()  # remove the extra whitespace to get the link
                if link not in diff_links:  # make sure link is not already downloaded
                    available_links.append(row)
                    download_jobs.append([next(downloader_cycle), link])
            input_file.seek(0)  # rewind our input file
            input_file.truncate()  # clear out the input file
            input_file.writelines(available_links)  # store back the available links
        diff_file.seek(0)  # rewind the diff file
        diff_file.truncate()  # blank out the diff file now that the input is updated
        # and now let's get to business...
        if download_jobs:
            download_pool = Pool(processes=5)  # make our pool use 5 processes
            # run asynchronously so we can capture results as soon as they are available
            for response in download_pool.imap_unordered(init_downloader, download_jobs):
                # since it returns the same parameters, the second item is a link
                # add the link to our `diff` file so it doesn't get downloaded again
                diff_file.write(response[1] + "\n")
        else:
            print("Nothing left to download...")
The whole idea is, as I wrote in the comment, to use a file to store downloaded links as they get downloaded, and then on the next run to filter out the already-downloaded links and update the input file. That way, even if you forcibly kill the process, it will always resume where it left off (except for partial downloads).