python task queue: how huey implements asynchronous tasks

Date: 2023-12-11 21:52:39

This article introduces a lightweight task-queue program for Python: huey. It lets you run distributed tasks asynchronously; read on if you're interested.

huey is a lightweight task queue. Its features and broker support are not as extensive as celery's, but it is deliberately small, and its code is easy to read.

About huey (lighter than celery, and easier to use than mrq or rq!):

a lightweight alternative.

- written in python
- no deps outside stdlib, except redis (or roll your own backend)
- support for django

supports:

- multi-threaded task execution
- scheduled execution at a given time
- periodic execution, like a crontab
- retrying tasks that fail
- task result storage
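The core idea behind a library like this can be shown without huey at all. Below is a toy, in-memory sketch (my own illustration, not huey's implementation) of the decorator-and-queue pattern: calling a decorated task enqueues a message instead of running the function, and a worker later pops and executes it.

```python
from collections import deque

class ToyHuey:
    """Toy stand-in for a task queue; real huey stores messages in redis."""
    def __init__(self):
        self.queue = deque()

    def task(self):
        def decorator(fn):
            def enqueue(*args, **kwargs):
                # calling the wrapped function only records the work to do
                self.queue.append((fn, args, kwargs))
            return enqueue
        return decorator

    def run_one(self):
        # what a worker thread would do: pop a message and execute it
        fn, args, kwargs = self.queue.popleft()
        return fn(*args, **kwargs)

toy = ToyHuey()

@toy.task()
def add_numbers(a, b):
    return a + b

add_numbers(2, 3)       # enqueues, does not execute
result = toy.run_one()  # the "worker" runs the task -> 5
```

This is only the calling convention; the real library adds serialization, a redis-backed queue, retries, and scheduling on top of it.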

Installation:

Installing

huey can be installed very easily using pip.

pip install huey

huey has no dependencies outside the standard library, but currently the only fully-implemented queue backend it ships with requires redis. To use the redis backend, you will need to install the python client.

pip install redis

Using git

If you want to run the very latest, feel free to pull down the repo from github and install by hand.

git clone

cd huey

python setup.py install

You can run the tests using the test-runner:

python setup.py test

Huey's API is described in detail in its documentation, along with all the parameters; here is a short example.

The code is as follows:

from huey import RedisHuey, crontab

huey = RedisHuey('my-app', host='')

@huey.task()
def add_numbers(a, b):
    return a + b

@huey.periodic_task(crontab(minute='0', hour='3'))
def nightly_backup():
    sync_all_data()
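To make the `crontab(minute='0', hour='3')` spec above concrete, here is a simplified sketch (my own illustration; huey's real `crontab()` supports ranges, steps, day-of-week, etc.) of how such a spec is matched against the current time:

```python
import datetime

def crontab_matches(minute, hour, dt):
    """'*' matches any value; otherwise compare against the concrete field."""
    return ((minute == '*' or dt.minute == int(minute)) and
            (hour == '*' or dt.hour == int(hour)))

# minute='0', hour='3' means the task is due once a day, at 03:00
due = crontab_matches('0', '3', datetime.datetime(2023, 12, 11, 3, 0))
not_due = crontab_matches('0', '3', datetime.datetime(2023, 12, 11, 4, 0))
```

The consumer's periodic thread wakes up once a minute and runs every periodic task whose spec matches the current minute.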

When huey runs as a worker, it accepts a number of CLI options.

The most commonly used are:

-l  path of the log file.

-w  number of workers; raising -w clearly increases task-processing capacity.

-p / --periodic  when the huey worker starts, it finds the crontab-style tasks in tasks.py and spawns dedicated threads to handle them.

-n  do not run the periodic crontab tasks automatically; periodic tasks execute only when you trigger them yourself.

--threads  self-explanatory.

The original documentation reads:

The following table lists the options available for the consumer as well as their default values.

-l, --logfile

Path to file used for logging. When a file is specified, by default Huey will use a rotating file handler (1MB / chunk) with a maximum of 3 backups. You can attach your own handler (huey.logger) as well. The default loglevel is INFO.

-v, --verbose

Verbose logging (equates to DEBUG level). If no logfile is specified and verbose is set, then the consumer will log to the console. This is very useful for testing/debugging.

-q, --quiet

Only log errors. The default loglevel for the consumer is INFO.

-w, --workers

Number of worker threads, the default is 1 thread but for applications that have many I/O bound tasks, increasing this number may lead to greater throughput.

-p, --periodic

Indicate that this consumer process should start a thread dedicated to enqueueing “periodic” tasks (crontab-like functionality). This defaults to True, so should not need to be specified in practice.

-n, --no-periodic

Indicate that this consumer process should not enqueue periodic tasks.

-d, --delay

When using a “polling”-type queue backend, the amount of time to wait between polling the backend. Default is 0.1 seconds.

-m, --max-delay

The maximum amount of time to wait between polling, if using weighted backoff. Default is 10 seconds.

-b, --backoff

The amount to back-off when polling for results. Must be greater than one. Default is 1.15.

-u, --utc

Indicates that the consumer should use UTC time for all tasks, crontabs and scheduling. Default is True, so in practice you should not need to specify this option.

--localtime

Indicates that the consumer should use localtime for all tasks, crontabs and scheduling. Default is False.

Examples

Running the consumer with 8 threads, a logfile for errors only, and a very short polling interval:

huey_consumer.py my.app.huey -l /var/log/app.huey.log -w 8 -b 1.1 -m 1.0

The huey task queue relies on redis for storing queued tasks, so install redis-server and redis-py first (installation is not covered here; it is easy to look up).

First, create the huey connection instance:

The code is as follows:

# config.py
from huey import Huey
from huey.backends.redis_backend import RedisBlockingQueue

queue = RedisBlockingQueue('test-queue', host='localhost', port=6379)
huey = Huey(queue)

Next come the tasks, i.e. the functions you want to put into the task queue. As with celery, rq, and mrq, they live in tasks.py.

The code is as follows:

# tasks.py
from config import huey  # import the huey we instantiated in config.py

@huey.task()
def count_beans(num):
    print('-- counted %s beans --' % num)

Now the program that actually drives it. main.py acts as the producer and tasks.py as the consumer; main.py feeds in the data.

The code is as follows:

# main.py
from config import huey          # import our "huey" object
from tasks import count_beans    # import our task

if __name__ == '__main__':
    beans = input('How many beans? ')
    count_beans(int(beans))
    print('Enqueued job to count %s beans' % beans)

1. Ensure you have Redis running locally
2. Ensure you have installed huey
3. Start the consumer: huey_consumer.py main.huey (notice this is "main.huey" and not "config.huey").
4. Run the main program: python main.py
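The producer/consumer split in the steps above can be sketched in a single self-contained script, with a stdlib queue and a worker thread standing in for redis and huey_consumer.py (a simulation for illustration, not huey's actual mechanics):

```python
import queue
import threading

job_queue = queue.Queue()   # stands in for the redis-backed queue
output = []

def count_beans(num):
    """The task (consumer side), as in tasks.py."""
    output.append('-- counted %s beans --' % num)

def worker():
    """Stand-in for the huey_consumer.py process."""
    while True:
        job = job_queue.get()
        if job is None:          # sentinel: shut the worker down
            break
        fn, args = job
        fn(*args)
        job_queue.task_done()

t = threading.Thread(target=worker)
t.start()
job_queue.put((count_beans, (100,)))   # main.py, the producer, enqueues
job_queue.put(None)                     # tell the worker to stop
t.join()
```

The point is the decoupling: the producer returns immediately after enqueueing, and the work happens in a different thread (with huey, a different process, possibly on a different machine).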

As with celery and rq, to fetch results you must declare a result store in config.py (or in your main code). At the moment huey only supports redis for this, but given its focus and small size, that is plenty!

It only takes a few lines: import RedisDataStore and declare the storage address.

The code is as follows:

# config.py
from huey import Huey
from huey.backends.redis_backend import RedisBlockingQueue
from huey.backends.redis_backend import RedisDataStore  # ADD THIS LINE

queue = RedisBlockingQueue('test-queue', host='localhost', port=6379)
result_store = RedisDataStore('results', host='localhost', port=6379)  # ADDED
huey = Huey(queue, result_store=result_store)  # ADDED result store
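The mechanics behind the result store (the uuid round-trip mentioned below) can be sketched with a plain dict in place of redis. This is my own illustration of the idea, not huey's AsyncData class:

```python
import uuid

result_store = {}   # stands in for RedisDataStore ('results' in redis)

class AsyncResult:
    """Toy result handle: each enqueued task gets a uuid key; the worker
    writes the return value under that key, and get() reads it back."""
    def __init__(self):
        self.key = str(uuid.uuid4())

    def get(self):
        return result_store.get(self.key)

res = AsyncResult()
before = res.get()                              # None: worker hasn't run yet
result_store[res.key] = 'Counted 100 beans'     # the worker stores the result
after = res.get()                               # now the value is available
```

So `res` holds no data itself, only the key; every `res.get()` is a lookup in the shared store.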

Now if we try again in ipython, we find we can retrieve the return value of the task in tasks.py. Even when you fetch it in main.py, it is actually being pulled out of redis by uuid.

The code is as follows:

>>> from main import count_beans
>>> res = count_beans(100)
>>> res                # what is "res" ?
>>> res.get()          # get the result of this task
'Counted 100 beans'

huey also supports celery-style delayed execution and crontab functionality. These features matter: you can set your own priorities and no longer need to lean on the system's crontab.

Usage is simple: just add a delay. Looking at huey's source, tasks run immediately by default; whether one actually starts, of course, depends on whether your worker threads are free.

The code is as follows:

>>> import datetime
>>> res = count_beans.schedule(args=(100,), delay=60)
>>> res
>>> res.get()                # this returns None, no data is ready
>>> res.get()                # still no data...
>>> res.get(blocking=True)   # ok, let's just block until it's ready
'Counted 100 beans'
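The `delay=60` behaviour above boils down to an "execute at" timestamp: the consumer only runs tasks whose time has come. A minimal sketch of that idea (my own illustration, not huey's scheduler):

```python
import time

pending = []   # (eta, fn, args) triples awaiting execution

def schedule_task(fn, args, delay):
    """Record the task with an 'execute at' timestamp, like .schedule(delay=...)."""
    pending.append((time.time() + delay, fn, args))

def run_due(now):
    """What the consumer's scheduler loop does: run tasks whose eta has passed."""
    results = []
    for item in list(pending):
        eta, fn, args = item
        if eta <= now:
            results.append(fn(*args))
            pending.remove(item)
    return results

def count_beans(num):
    return 'Counted %s beans' % num

schedule_task(count_beans, (100,), delay=60)
first_pass = run_due(time.time())        # not due yet -> nothing runs
second_pass = run_due(time.time() + 61)  # 61s "later" -> the task runs
```

This also explains why `res.get()` returns None at first: the task simply has not executed yet, so nothing is in the result store.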
