
How to get a delay of 30 sec in the current time in PHP?

date("F d Y H:i:s") gives the current time in PHP. How can I get the time 30 seconds or 1 minute behind the current time? I'm not sure, but something like date("F d Y H:i:s-30") or date("F d Y H:i-1:s")?
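In PHP, the usual way is to pass an adjusted Unix timestamp (e.g. time() - 30) as date()'s second argument rather than editing the format string. For reference, the same offset is plain datetime arithmetic in Python (shown in Python since the answer code below uses it):

```python
from datetime import datetime, timedelta

now = datetime.now()
thirty_sec_ago = now - timedelta(seconds=30)  # current time minus 30 seconds
one_min_ago = now - timedelta(minutes=1)      # current time minus 1 minute

# "%B %d %Y %H:%M:%S" roughly matches PHP's "F d Y H:i:s" format
print(thirty_sec_ago.strftime("%B %d %Y %H:%M:%S"))
print(one_min_ago.strftime("%B %d %Y %H:%M:%S"))
```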


Source: https://stackoverflow.com/questions/1853516
Updated: 2023-09-27 06:09

Best answer


If this is a continuation of your project from yesterday you already have your download list in memory - just remove the entries from the loaded list as their processes finish download and only write down the whole list over the input file once you're exiting the 'downloader'. There is no reason to constantly write down the changes.
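A minimal sketch of that in-memory approach, assuming hypothetical names (download here is a placeholder for the real Downloader logic, and drain returns whatever the caller should write back over the input file once, on exit):

```python
from multiprocessing import Pool

def download(link):
    # placeholder for the real download logic from the existing code base
    return link  # return the link so the parent knows which entry finished

def drain(pending):
    """Remove entries from the in-memory list as their downloads finish.

    Whatever remains (e.g. after an error) is exactly what still needs
    downloading, so the caller writes it over the input file in one go.
    """
    with Pool(processes=2) as pool:
        for finished in pool.imap_unordered(download, list(pending)):
            pending.remove(finished)  # drop the completed entry from memory
    return pending

if __name__ == "__main__":
    leftover = drain(["http://example.com/a", "http://example.com/b"])
    print(leftover)  # everything finished, so nothing is left to write back
```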

If you want to know (say from an external process) when a url gets downloaded even while your 'downloader' is running, write in a downloaded.dat a new line each time a process returns that download was successful.

Of course, in both cases, write from within your main process/thread so you don't have to worry about mutex.

UPDATE - Here's how to do it with an additional file, using the same code base as yesterday:

from itertools import cycle  # round-robin over the downloader configurations
from multiprocessing import Pool  # worker processes for parallel downloads

# Downloader itself comes from the previous day's code base referenced above
def init_downloader(params):  # our downloader initializer
    downloader = Downloader(**params[0])  # instantiate our downloader
    downloader.run(params[1])  # run our downloader
    return params  # job finished, return the same params for identification

if __name__ == "__main__":  # important protection for cross-platform use

    downloader_params = [  # Downloaders will be initialized using these params
        {"port_number": 7751},
        {"port_number": 7851},
        {"port_number": 7951}
    ]
    downloader_cycle = cycle(downloader_params)  # use a cycle for round-robin distribution

    with open("downloaded_links.dat", "a+") as diff_file:  # open your diff file
        diff_file.seek(0)  # rewind the diff file to the beginning to capture all lines
        diff_links = {row.strip() for row in diff_file}  # load downloaded links into a set
        with open("input_links.dat", "r+") as input_file:  # open your input file
            available_links = []
            download_jobs = []  # store our downloader parameters + a link here
            # read our file line by line and filter out downloaded links
            for row in input_file:  # loop through our file
                link = row.strip()  # remove the extra whitespace to get the link
                if link not in diff_links:  # make sure link is not already downloaded
                    available_links.append(row)
                    download_jobs.append([next(downloader_cycle), link])
            input_file.seek(0)  # rewind our input file
            input_file.truncate()  # clear out the input file
            input_file.writelines(available_links)  # store back the available links
            diff_file.seek(0)  # rewind the diff file
            diff_file.truncate()  # blank out the diff file now that the input is updated
        # and now let's get to business...
        if download_jobs:
            download_pool = Pool(processes=5)  # make our pool use 5 processes
            # run asynchronously so we can capture results as soon as they are available
            for response in download_pool.imap_unordered(init_downloader, download_jobs):
                # since it returns the same parameters, the second item is a link
                # add the link to our `diff` file so it doesn't get downloaded again
                diff_file.write(response[1] + "\n")
        else:
            print("Nothing left to download...")

The whole idea is, as I wrote in the comment, to use a file to store downloaded links as they get downloaded, and then on the next run to filter out the downloaded links and update the input file. That way even if you forcibly kill it, it will always resume where it left off (except for the partial downloads).
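One caveat on the forced-kill claim: the write() in the loop goes through Python's userspace buffer, so the most recent lines can be lost if the process dies before the buffer drains. A small helper (record_downloaded is a hypothetical name, not part of the code above) that forces each line to disk:

```python
import os

def record_downloaded(diff_file, link):
    """Append a finished link and push it to disk immediately, so a
    forced kill cannot lose it to Python's or the OS's write buffers."""
    diff_file.write(link + "\n")
    diff_file.flush()             # drain Python's userspace buffer
    os.fsync(diff_file.fileno())  # ask the OS to commit its page cache
```

In the loop above one would call record_downloaded(diff_file, response[1]) instead of the bare write; the fsync costs a little throughput per link, which is usually negligible next to a network download.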
