Stream Framework (previously Feedly)

Note

This project was previously named Feedly. As requested by feedly.com we have now renamed the project to Stream Framework. You can find more details about the name change on the blog.

What can you build?

Stream Framework allows you to build newsfeed and notification systems using Cassandra and/or Redis. Examples of what you can build are the Facebook newsfeed, your Twitter stream or your Pinterest following page. We've built Stream Framework for Fashiolista, where it powers the flat feed, aggregated feed and the notification system. (Feeds are also commonly called activity streams, activity feeds or news streams.)

To quickly get you acquainted with Stream Framework, we've created a Pinterest-like example application; you can find it here.

GetStream.io

Stream Framework's authors also offer a SaaS solution for building feed systems at getstream.io. The hosted service is highly optimized and allows you to start building your application immediately. It saves you the hassle of maintaining Cassandra, Redis, Faye, RabbitMQ and Celery workers. Clients are available for Node, Ruby, Python, Java and PHP.

Consultancy

For Stream Framework and GetStream.io consultancy please contact thierry at getstream.io

Authors

  • Thierry Schellenbach (thierry at getstream.io)
  • Tommaso Barbugli (tommaso at getstream.io)
  • Guyon Morée

Resources

Tutorials

Using Stream Framework

This quick example will show you how to publish a Pin to all your followers. So let's create an activity for the item you just pinned.

import pytz

from django.utils.timezone import make_naive
from stream_framework.activity import Activity


def create_activity(pin):
    # PinVerb is the custom verb registered for pins (see the Verbs docs);
    # make_naive and pytz convert the creation time to a naive UTC datetime
    activity = Activity(
        pin.user_id,
        PinVerb,
        pin.id,
        pin.influencer_id,
        time=make_naive(pin.created_at, pytz.utc),
        extra_context=dict(item_id=pin.item_id)
    )
    return activity

Next up we want to start publishing this activity on several feeds. First off, we want to insert it into your personal feed, and secondly into the feeds of all your followers. Let's start by defining these feeds.

# setting up the feeds

from stream_framework.feeds.redis import RedisFeed


class PinFeed(RedisFeed):
    key_format = 'feed:normal:%(user_id)s'

class UserPinFeed(PinFeed):
    key_format = 'feed:user:%(user_id)s'

Writing to these feeds is very simple. For instance, to write to the feed of user 13 you would do:

feed = UserPinFeed(13)
feed.add(activity)

But we don't want to publish to just one user's feed. We want to publish to the feeds of all the users who follow you. This action is called a fanout and is abstracted away in the manager class. We need to subclass the Manager class and tell it how to figure out which users follow us.

from stream_framework.feed_managers.base import Manager


class PinManager(Manager):
    feed_classes = dict(
        normal=PinFeed,
    )
    user_feed_class = UserPinFeed

    def add_pin(self, pin):
        activity = pin.create_activity()
        # add user activity adds it to the user feed, and starts the fanout
        self.add_user_activity(pin.user_id, activity)

    def get_user_follower_ids(self, user_id):
        # Follow is your application's follow model; FanoutPriority comes
        # from stream_framework.feed_managers.base
        ids = Follow.objects.filter(target=user_id).values_list('user_id', flat=True)
        return {FanoutPriority.HIGH: ids}

manager = PinManager()

Now that the manager class is set up, broadcasting a pin becomes as easy as:

manager.add_pin(pin)

Calling this method will insert the pin into your personal feed and into all the feeds of the users who follow you. It does so by spawning many small tasks via Celery. In Django (or any other framework) you can now show the user's feed.

# django example

from django.contrib.auth.decorators import login_required
from django.shortcuts import render_to_response
from django.template import RequestContext


@login_required
def feed(request):
    '''
    Items pinned by the people you follow
    '''
    context = RequestContext(request)
    feed = manager.get_feeds(request.user.id)['normal']
    activities = list(feed[:25])
    context['activities'] = activities
    response = render_to_response('core/feed.html', context)
    return response

This example only briefly covered how Stream Framework works. The full explanation can be found on read the docs.

Features

Stream Framework uses Celery and Redis/Cassandra to build a system with heavy writes and extremely light reads. It features:

  • Asynchronous tasks (All the heavy lifting happens in the background, your users don’t wait for it)
  • Reusable components (You will need to make tradeoffs based on your use cases, Stream Framework doesn't get in your way)
  • Full Cassandra and Redis support
  • The Cassandra storage uses the new CQL3 and Python-Driver packages, which give you access to the latest Cassandra features.
  • Built for the extremely performant Cassandra 2.0

Background Articles

A lot has been written about the best approaches to building feed based systems. Here's a collection of some of the talks:

Twitter 2013: Redis based, database fallback, very similar to Fashiolista's old approach.

Etsy feed scaling (Gearman, separate scoring and aggregation steps, rollups - aggregation part two)

Facebook history

Django project with good naming conventions

Activity stream specification

Quora post on best practices

Quora scaling a social network feed

Redis ruby example

FriendFeed approach

Thoonk setup

Yahoo Research Paper

Twitter’s approach

Cassandra at Instagram

Documentation

Installation

Installation is easy using pip. Both the Redis and Cassandra dependencies are installed by the setup.

$ pip install Stream-Framework

or get it from source

$ git clone https://github.com/tschellenbach/Stream-Framework.git
$ cd Stream-Framework
$ python setup.py install

Depending on the backend you are going to use (see Choosing a storage layer) you will need to have the backend server up and running.

Feed setup

A feed object contains activities. The example below shows you how to setup two feeds:

# implement your feed with redis as storage

from stream_framework.feeds.redis import RedisFeed

class PinFeed(RedisFeed):
    key_format = 'feed:normal:%(user_id)s'

class UserPinFeed(PinFeed):
    key_format = 'feed:user:%(user_id)s'

Next up we need to hook up the Feeds to your Manager class. The Manager class knows how to fanout new activities to the feeds of all your followers.

from stream_framework.feed_managers.base import Manager


class PinManager(Manager):
    feed_classes = dict(
        normal=PinFeed,
    )
    user_feed_class = UserPinFeed

    def add_pin(self, pin):
        activity = pin.create_activity()
        # add user activity adds it to the user feed, and starts the fanout
        self.add_user_activity(pin.user_id, activity)

    def get_user_follower_ids(self, user_id):
        ids = Follow.objects.filter(target=user_id).values_list('user_id', flat=True)
        return {FanoutPriority.HIGH: ids}

manager = PinManager()

Adding data

You can add an Activity object to the feed using the add or add_many methods.

feed = UserPinFeed(13)
feed.add(activity)

# add many example
feed.add_many([activity])

What’s an activity

The activity object is best described using an example. For Pinterest, for instance, a common activity looks like this:

Thierry added an item to his board Surf Girls.

In terms of the activity object this would translate to:

Activity(
    actor=13,  # Thierry's user id
    verb=1,  # The id associated with the Pin verb
    object=1,  # The id of the newly created Pin object
    target=1,  # The id of the Surf Girls board
    time=datetime.utcnow(),  # The time the activity occurred
)

The names for these fields are based on the activity stream spec.

Verbs

Adding new verbs

Registering a new verb is quite easy. Just subclass the Verb class and give it a unique id.

from stream_framework.verbs import register
from stream_framework.verbs.base import Verb


class Pin(Verb):
    id = 5
    infinitive = 'pin'
    past_tense = 'pinned'

register(Pin)

See also

Make sure your verbs are registered before you read data from stream_framework. If you use Django, you can simply define or import them in models.py to make sure they are loaded early.
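
For example, a minimal sketch, assuming your verbs live in a hypothetical myapp/verbs.py module:

# myapp/models.py
# importing the module that defines and registers your verbs guarantees
# registration happens before any feed data is read
from myapp import verbs  # noqa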

Getting verbs

You can retrieve verbs by calling get_verb_by_id.

from stream_framework.verbs import get_verb_by_id

pin_verb = get_verb_by_id(5)

Querying feeds

You can query the feed using Python slicing. In addition, you can order and filter the feed on several predefined fields. Examples are shown below.

Slicing:

feed = RedisFeed(13)
activities = feed[:10]

Filtering and Pagination:

feed.filter(activity_id__gte=1)[:10]
feed.filter(activity_id__lte=1)[:10]
feed.filter(activity_id__gt=1)[:10]
feed.filter(activity_id__lt=1)[:10]
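
For pagination you typically combine a filter with slicing, using the last activity you rendered as the boundary. A minimal sketch (assuming, as described in the Activity class docs, that the activity id equals Activity.serialization_id):

feed = RedisFeed(13)
page = list(feed[:10])
if page:
    # fetch the next page: everything older than the last activity shown
    next_page = list(feed.filter(activity_id__lt=page[-1].serialization_id)[:10])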

Ordering feeds

New in version 0.10.0.

This is only supported using Cassandra and Redis at the moment.

feed.order_by('activity_id')
feed.order_by('-activity_id')

Settings

Note

Settings currently only support Django settings. To add support for Flask or other frameworks, simply change stream_framework/settings.py.

Redis Settings

STREAM_REDIS_CONFIG

The settings for Redis: the list of Redis servers you want to use as feed storage.

Defaults to

STREAM_REDIS_CONFIG = {
    'default': {
        'host': '127.0.0.1',
        'port': 6379,
        'db': 0,
        'password': None
    },
}

Cassandra Settings

STREAM_CASSANDRA_HOSTS

The list of nodes that are part of the cassandra cluster.

Note

You don't need to list every node of the cluster; cassandra-driver has built-in node discovery.

Defaults to ['localhost']

STREAM_DEFAULT_KEYSPACE

The cassandra keyspace where feed data is stored

Defaults to stream_framework

STREAM_CASSANDRA_CONSISTENCY_LEVEL

The consistency level used for both reads and writes to the cassandra cluster.

Defaults to cassandra.ConsistencyLevel.ONE

CASSANDRA_DRIVER_KWARGS

Extra keyword arguments sent to cassandra driver (see http://datastax.github.io/python-driver/_modules/cassandra/cluster.html#Cluster)

Defaults to {}
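
Putting these together, a Cassandra backed setup might use a Django settings module like this minimal sketch (the host addresses are placeholders):

from cassandra import ConsistencyLevel

STREAM_CASSANDRA_HOSTS = ['10.0.0.1', '10.0.0.2']
STREAM_DEFAULT_KEYSPACE = 'stream_framework'
STREAM_CASSANDRA_CONSISTENCY_LEVEL = ConsistencyLevel.QUORUM
CASSANDRA_DRIVER_KWARGS = {}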

Metric Settings

STREAM_METRIC_CLASS

The metric class that will be used to collect feeds metrics.

Note

The default metric class does not collect any metrics and should be used as an example for subclasses

Defaults to stream_framework.metrics.base.Metrics

STREAM_METRICS_OPTIONS

A dictionary with options to send to the metric class at initialisation time.

Defaults to {}

Metrics

Stream Framework collects metrics regarding feed operations. The default behaviour is to ignore collected metrics rather than sending them anywhere.

You can configure the metric class with the STREAM_METRIC_CLASS setting and send options as a Python dict via STREAM_METRICS_OPTIONS.
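
For example, to point Stream Framework at the bundled statsd backend described below (referencing the class by its dotted path; see get_class_from_string in the utils module):

STREAM_METRIC_CLASS = 'stream_framework.metrics.statsd.StatsdMetrics'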

Sending metrics to Statsd

Stream Framework comes with StatsD support; both the statsd and python-statsd libraries are supported.

If you use statsd you should use the metric class stream_framework.metrics.statsd.StatsdMetrics, while if you are a user of python-statsd you should use stream_framework.metrics.python_statsd.StatsdMetrics.

The two libraries do the same job and both are suitable for production use.

By default these two classes send metrics to localhost, which is probably not what you want.

In real life you will need something like this:

STREAM_METRICS_OPTIONS = {
    'host': 'my.statsd.host.tld',
    'port': 8125,
    'prefix': 'stream'
}

Custom metric classes

If you need to send metrics to an unsupported backend, you only need to create your own subclass of stream_framework.metrics.base.Metrics and configure your application to use it.
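
A minimal sketch of such a subclass; the on_feed_read hook shown here is an assumption, so check stream_framework.metrics.base.Metrics for the exact no-op methods you want to override:

import logging

from stream_framework.metrics.base import Metrics

logger = logging.getLogger(__name__)


class LoggingMetrics(Metrics):
    # forward feed reads to a logger instead of discarding them
    def on_feed_read(self, feed_class, activities_count):
        logger.info('%s read %s activities', feed_class, activities_count)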

Testing Stream Framework

Warning

We strongly advise against running the tests on a machine that is hosting production Redis or Cassandra data!

In order to test Stream Framework you need to install its test requirements with

python setup.py test

or, if you want more control over the test run, you can use the py.test entry point directly (assuming you are in the stream_framework directory):

py.test stream_framework/tests

The test suite connects to Redis on 127.0.0.1:6379 and to a Cassandra node on 127.0.0.1 using the native protocol.

The easiest way to run a Cassandra test cluster is using the awesome ccm package.

If you are not running a Cassandra node on localhost, you can specify a different address with the TEST_CASSANDRA_HOST environment variable.
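
For example (the address is a placeholder):

$ TEST_CASSANDRA_HOST=192.168.50.20 py.test stream_framework/tests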

Every commit is built on Travis CI; you can see the current state and the build history here.

If you intend to contribute, we suggest installing pytest's coverage plugin, so you can make sure your code changes are exercised by the tests.

Support

If you need help you can try IRC or the mailing list. Issues can be reported on Github.

Activity class

Activities are the core data in Stream Framework; their implementation follows the activitystream schema specification. An activity in Stream Framework is composed of an actor, a verb and an object; for example: "Geraldine posted a photo". The data stored in activities can be extended if necessary; depending on how you use Stream Framework you might want to store some extra information. Here are a few good rules of thumb to follow in case you are not sure whether some information should be stored in Stream Framework (a short example follows the lists below):

Good choice:

  1. Add a field used to perform aggregation (e.g. the object's category)
  2. Keep every piece of information needed to work with activities in Stream Framework (e.g. to avoid database lookups)

Bad choice:

  1. The data stored in the activity gets updated
  2. The data requires a lot of storage
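
For example, a small immutable field used later for aggregation is a good fit for extra_context. A minimal sketch, reusing the PinVerb from the earlier Pinterest example (the ids are illustrative):

import datetime

from stream_framework.activity import Activity

activity = Activity(
    13,               # actor: the user id
    PinVerb,          # the verb registered earlier
    42,               # object: the pin id
    time=datetime.datetime.utcnow(),
    extra_context=dict(category='surfing')  # small and immutable: safe to store
)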

Activity storage strategies

Activities are stored in Stream Framework in a way that tries to maximise the benefits of the storage backend used.

When using the Redis backend Stream Framework will keep the data normalized: activities are stored once in a special storage (the activity storage) and user feeds keep only a reference (activity_id / serialization_id). This allows Stream Framework to keep the (expensive) memory usage as low as possible.

When using Cassandra as storage Stream Framework will denormalize activities: there is no separate activity storage; instead every user feed keeps the complete activity. Doing so allows Stream Framework to minimise the number of Cassandra nodes queried when retrieving data or writing to feeds.

In both storages activities are always stored in feeds sorted by their creation time (aka Activity.serialization_id).

Extend the activity class

New in version 0.10.0.

You can subclass the activity model to add your own methods. After you’ve created your own activity model you need to hook it up to the feed. An example follows below

from stream_framework.activity import Activity
from stream_framework.feeds.redis import RedisFeed

# subclass the activity object
class CustomActivity(Activity):
    def my_method(self):
        pass

# hook up the custom activity object to the Redis feed
class CustomFeed(RedisFeed):
    activity_class = CustomActivity

For aggregated feeds you can customize both the activity and the aggregated activity object. You can try this as follows:

from stream_framework.activity import AggregatedActivity
from stream_framework.feeds.aggregated_feed.redis import RedisAggregatedFeed

# define the custom aggregated activity
class CustomAggregated(AggregatedActivity):
    pass

# hook the custom classes up to the feed
class RedisCustomAggregatedFeed(RedisAggregatedFeed):
    activity_class = CustomActivity
    aggregated_activity_class = CustomAggregated

Activity serialization

Activity order and uniqueness

Aggregated activities

Choosing a storage layer

Currently Stream Framework supports both Cassandra and Redis as storage backends.

Summary

Redis is super easy to get started with and works fine for smaller use cases. If you're just getting started, use Redis. When your data requirements become larger, though, it becomes really expensive to store all the data in Redis. For larger use cases we therefore recommend Cassandra.

Redis (2.7 or newer)

PROS:

  • Easy to install
  • Super reliable
  • Easy to maintain
  • Very fast

CONS:

  • Expensive memory only storage
  • Manual sharding

Redis stores its complete dataset in memory. This makes sure that all operations are always fast. It does however mean that you might need a lot of storage.

A common approach is therefore to use Redis storage for some of your feeds and to fall back to your database for less frequently requested data.

Twitter currently uses this approach and Fashiolista has used a system like this in the first half of 2013.

The great benefit of using Redis comes in ease of installation, reliability and maintainability. Basically it just works and there's little you need to learn to maintain it.

Redis doesn't support any form of cross-machine distribution, so if you add a new node to your cluster you need to manually move or recreate the data.

In conclusion, I believe Redis is your best bet if you can fall back to the database when needed.

Cassandra (2.0 or newer)

PROS:

  • Stores to disk
  • Automatic sharding across nodes
  • Awesome monitoring tools (opscenter)

CONS:

  • Not as easy to set up
  • Hard to maintain

Cassandra stores data to both disk and memory. Instagram has recently switched from Redis to Cassandra. Storing data to disk can potentially be a big cost saving.

In addition adding new machines to your Cassandra cluster is a breeze. Cassandra will automatically distribute the data to new machines.

If you are using Amazon EC2 we suggest trying Datastax's easy AMI to get started on AWS.

Background Tasks with celery

Stream Framework uses Celery to do the heavy fanout write operations in the background.

We strongly suggest having a look at the Celery documentation if you are not familiar with the project.

Fanout

When an activity is added, Stream Framework will perform a fanout to all subscribed feeds. The base Stream Framework manager spawns one Celery fanout task every 100 feeds. Change the value of fanout_chunk_size on your manager if you think this number is too low or too high for your use case; a sketch follows after the list below.

A few things to keep in mind when doing so:

  1. Really high values lead to a mix of heavy tasks and light tasks (not good!)
  2. Publishing and consuming tasks introduces some overhead; don't spawn too many tasks
  3. Stream Framework writes data in batches; that's a really good optimization you want to keep
  4. Huge tasks have a higher chance of timing out
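
A minimal sketch of tuning the chunk size, reusing the PinManager from the earlier examples:

class PinManager(Manager):
    user_feed_class = UserPinFeed
    feed_classes = dict(normal=PinFeed)
    # spawn one fanout task per 500 feeds instead of the default 100
    fanout_chunk_size = 500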

Note

When developing you can run fanouts without Celery by setting CELERY_ALWAYS_EAGER = True

Prioritise fanouts

Stream Framework partitions fanout tasks into two priority groups. Fanouts with different priorities perform exactly the same operations (adding/removing activities to/from a feed); the substantial difference is that they get published to different queues for processing. Going back to our Pinterest example app, you can use priorities to assign more resources to fanouts that target active users and send the ones for inactive users to a different cluster of workers. This also makes it easier and cheaper to keep active users' feeds updated during activity spikes, because you need to scale up capacity less often.

The Stream Framework manager is the best place to implement your high/low priority fanouts; in fact the get_user_follower_ids method is required to return the feed ids grouped by priority.

For example:

class MyStreamManager(Manager):

    def get_user_follower_ids(self, user_id):
        follower_ids = {
            FanoutPriority.HIGH: get_follower_ids(user_id, active=True),
            FanoutPriority.LOW: get_follower_ids(user_id, active=False)
        }
        return follower_ids

Celery and Django

If this is the first time you use Celery and Django together, we suggest following this document's instructions.

It will guide you through the required steps to get Celery background processing up and running.

Using other job queue libraries

As of today, background processing is tied to Celery.

While we are not planning to support different job queue libraries in the near future, using something other than Celery should be quite easy and can mostly be done by subclassing the feed manager.

Tutorial: building a notification feed

Note

We are still improving this tutorial. In its current state it might be a bit hard to follow.

What is a notification system?

Building a scalable notification system is almost entirely identical to building an activity feed, although from the user's perspective the functionality is quite different. A notification system commonly shows activity related to your account, whereas an activity stream shows activity by the people you follow. Examples of Fashiolista's notification system and Facebook's system are shown below. Fashiolista's system runs on Stream Framework.

[Screenshots: Fashiolista's notification system and Facebook's notification system]

It looks very different from an activity stream, but the technical implementation is almost identical. Only the feed manager class is different, since the notification system has no fanouts.

Note

Remember, fanout is the process which pushes a little bit of data to all of your followers in many small, asynchronous tasks.

Tutorial

For this tutorial we’ll show you how to customize and setup your own notification system.

Step 1 - Subclass NotificationFeed

As a first step we’ll subclass NotificationFeed and customize the storage location and the aggregator.

from stream_framework.feeds.aggregated_feed.notification_feed import RedisNotificationFeed

class MyNotificationFeed(RedisNotificationFeed):
    #: the key format determines where the data gets stored
    key_format = 'feed:notification:%(user_id)s'

    #: the aggregator controls how the activities get aggregated;
    #: MyAggregator is defined in step 2 below
    aggregator_class = MyAggregator

Step 2 - Subclass the aggregator

Secondly we want to customize how activities get grouped together. Most notification systems need to aggregate activities. In this case we'll aggregate on verb and date, so the aggregations will show something like "thierry, peter and two other people liked your photo".

from stream_framework.aggregators.base import BaseAggregator


class MyAggregator(BaseAggregator):
    '''
    Aggregates based on the same verb and same time period
    '''
    def get_group(self, activity):
        '''
        Returns a group based on the day and verb
        '''
        verb = activity.verb.id
        date = activity.time.date()
        group = '%s-%s' % (verb, date)
        return group

Step 3 - Test adding data

The aggregated feed uses the same API as the flat feed. You can simply add items by calling feed.add or feed.add_many. An example for inserting data is shown below:

feed = MyNotificationFeed(user_id)
activity = Activity(
    user_id, LoveVerb, object_id, influencer_id, time=created_at,
    extra_context=dict(entity_id=entity_id)
)
feed.add(activity)
print feed[:5]

Step 4 - Implement manager functionality

To keep our code clean we’ll implement a very simple manager class to abstract away the above code.

class MyNotification(object):
    '''
    Abstract the access to the notification feed
    '''
    def add_love(self, love):
        # the notification goes to the user whose item was loved
        feed = MyNotificationFeed(love.influencer_id)
        activity = Activity(
            love.user_id, LoveVerb, love.id, love.influencer_id,
            time=love.created_at, extra_context=dict(entity_id=love.entity_id)
        )
        feed.add(activity)

Stream Framework Design

The first approach

A first feed solution usually looks something like this:

SELECT * FROM tweets
JOIN follow ON (follow.target_id = tweet.user_id)
WHERE follow.user_id = 13

This works in the beginning, and with a well tuned database it will keep on working nicely for quite some time. However, at some point the load becomes too much and this approach falls apart. Unfortunately it's very hard to split up the tweets in a meaningful way. You could split them up by date or user, but every query would still hit many of your shards. Eventually this system collapses; read more about this in Facebook's presentation.

Push or Push/Pull

In general there are two similar solutions to this problem.

In the push approach you publish your activity (e.g. a tweet on Twitter) to all of your followers. So basically you create a small list per user, into which you insert the activities created by the people they follow. This involves a huge number of writes, but reads are really fast and can easily be sharded.

For the push/pull approach you implement the push based system for a subset of your users. At Fashiolista, for instance, we used to have a push based approach for active users. For inactive users we only kept a small feed and eventually fell back to the database when we ran out of results. A sketch of the push approach follows below.
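
To make the push approach concrete, here is a minimal, self-contained sketch in plain Python (the follower lookup and the feed store are stand-ins for your own infrastructure):

from collections import defaultdict

feeds = defaultdict(list)       # one small activity list per user
followers = {13: [1, 2, 3]}     # user 13 is followed by users 1, 2 and 3

def publish(user_id, activity):
    # one write per follower: many writes, but reading a feed is a single lookup
    for follower_id in followers.get(user_id, []):
        feeds[follower_id].append(activity)

publish(13, 'tweet 1')
assert feeds[1] == ['tweet 1']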

Stream Framework

Stream Framework allows you to easily use Cassandra/Redis and Celery (an awesome task broker) to build infinitely scalable feeds. The high level functionality is located in 4 classes.

  • Activities
  • Feeds
  • Feed managers
  • Aggregators

Activities are the blocks of content which are stored in a feed. They follow the nomenclature from the activity stream spec (http://activitystrea.ms/specs/atom/1.0/#activity.summary). Every activity therefore stores at least:

  • Time (the time of the activity)
  • Verb (the action, e.g. loved, liked, followed)
  • Actor (the user id doing the action)
  • Object (the object the action is related to)
  • Extra context (Used for whatever else you need to store at the activity level)

Optionally you can also add a target (which is best explained in the activity docs)

Feeds are sorted containers of activities. You can easily add and remove activities from them.

Stream Framework classes (feed managers) handle the logic of addressing the feed objects. They handle the complex bits of fanning out to all your followers when you create a new object (such as a tweet).

In addition there are several utility classes which you will encounter:

  • Serializers (classes handling serialization of Activity objects)
  • Aggregators (utility classes for creating smart/computed feeds based on algorithms)
  • Timeline Storage (cassandra or redis specific storage functions for sorted storage)
  • Activity Storage (cassandra or redis specific storage for hash/dict based storage)

Cassandra storage backend

This document is specific to the Cassandra backend.

Create keyspace and column families

Keyspace and column families for your feeds can be created via cqlengine's sync_table.

from myapp.feeds import MyCassandraFeed
from cqlengine.management import sync_table

timeline = MyCassandraFeed.get_timeline_storage()
sync_table(timeline.model)

sync_table can also create missing columns but it will never delete removed columns.

Use a custom activity model

Since the Cassandra backend uses CQL3 column families, activities have a predefined schema. cqlengine is used to read/write data from and to Cassandra.

from cqlengine import columns

from stream_framework.feeds.cassandra import CassandraFeed
from stream_framework.storage.cassandra import models


class MyCustomActivity(models.Activity):
    actor = columns.Bytes(required=False)


class MySuperAwesomeFeed(CassandraFeed):
    timeline_model = MyCustomActivity

Remember to resync your column family when you add new columns (see above).

API Docs

Stream Framework API Docs

activity Module

class stream_framework.activity.Activity(actor, verb, object, target=None, time=None, extra_context=None)[source]

Bases: stream_framework.activity.BaseActivity

Wrapper class for storing activities.

Note

actor_id, target_id and object_id are always present.

actor, target and object are lazy by default.

get_dehydrated()[source]

returns the dehydrated version of the current activity

serialization_id

serialization_id is used to keep items locally sorted and unique (e.g. it is used as the score in Redis sorted sets and as the column name in Cassandra)

serialization_id is also used to select random activities from the feed (e.g. removing activities from feeds must be a fast operation); for this reason the serialization_id should be unique and not change over time

E.g. activity.serialization_id = 1373266755000000000042008 is composed of:

  • 1373266755000 – the activity creation time as epoch with millisecond resolution
  • 0000000000042 – the left padded object_id (10 digits)
  • 008 – the left padded activity verb id (3 digits)

Returns: int – the serialization id
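
A purely illustrative sketch of decomposing such an id, assuming the epoch-ms / 10 digit / 3 digit layout described above:

serialization_id = 1373266755000000000042008
verb_id = serialization_id % 1000                  # last 3 digits
object_id = (serialization_id // 1000) % 10 ** 10  # next 10 digits
epoch_ms = serialization_id // 10 ** 13            # leading epoch part
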
class stream_framework.activity.AggregatedActivity(group, activities=None, created_at=None, updated_at=None)[source]

Bases: stream_framework.activity.BaseActivity

Object to store aggregated activities

activity_count

Returns the number of activities

activity_ids

Returns a list of activity ids

actor_count

Returns a count of the number of actors. When dealing with large lists, this is only an approximation of the number of actors

actor_ids
append(activity)[source]
contains(activity)[source]

Checks if the activity is present in this aggregated activity

get_dehydrated()[source]

returns the dehydrated version of the current activity

get_hydrated(activities)[source]

expects activities to be a dict like this {‘activity_id’: Activity}

is_read()[source]

Returns if the activity should be considered as read at this moment

is_seen()[source]

Returns if the activity should be considered as seen at this moment

last_activities
last_activity
max_aggregated_activities_length = 15
object_ids
other_actor_count
remove(activity)[source]
remove_many(activities)[source]
serialization_id

serialization_id is used to keep items locally sorted and unique (e.g. it is used as the score in Redis sorted sets and as the column name in Cassandra)

serialization_id is also used to select random activities from the feed (e.g. removing activities from feeds must be a fast operation); for this reason the serialization_id should be unique and not change over time

E.g. activity.serialization_id = 1373266755000000000042008 is composed of:

  • 1373266755000 – the activity creation time as epoch with millisecond resolution
  • 0000000000042 – the left padded object_id (10 digits)
  • 008 – the left padded activity verb id (3 digits)

Returns: int – the serialization id
update_read_at()[source]

A hook method that updates read_at to the current date

update_seen_at()[source]

A hook method that updates seen_at to the current date

verb
verbs
class stream_framework.activity.BaseActivity[source]

Bases: object

Common parent class for Activity and AggregatedActivity. Check for this if you want to see if something is an activity.

class stream_framework.activity.DehydratedActivity(serialization_id)[source]

Bases: stream_framework.activity.BaseActivity

The dehydrated version of an Activity. The only data stored is the serialization_id of the original activity.

Serializers can store this instead of the full activity; feed classes can then hydrate it back into a full activity.

get_hydrated(activities)[source]

returns the full hydrated Activity from activities

Parameters:activities – a dict {'activity_id': Activity}
class stream_framework.activity.NotificationActivity(*args, **kwargs)[source]

Bases: stream_framework.activity.AggregatedActivity

default_settings Module

exceptions Module

exception stream_framework.exceptions.ActivityNotFound[source]

Bases: exceptions.Exception

Raised when the activity is not present in the aggregated Activity

exception stream_framework.exceptions.DuplicateActivityException[source]

Bases: exceptions.Exception

Raised when someone sticks a duplicate activity in the aggregated activity

exception stream_framework.exceptions.SerializationException[source]

Bases: exceptions.Exception

Raised when encountering invalid data for serialization

settings Module

stream_framework.settings.import_global_module(module, current_locals, current_globals, exceptions=None)[source]

Import the requested module into the global scope. Warning! This will import your module into the global scope.

Example:

from django.conf import settings
import_global_module(settings, locals(), globals())
Parameters:
  • module – the module which to import into global scope
  • current_locals – the current locals
  • current_globals – the current globals
  • exceptions – the exceptions which to ignore while importing

tasks Module

utils Module

class stream_framework.utils.LRUCache(capacity)[source]
get(key)[source]
set(key, value)[source]
stream_framework.utils.chunks(iterable, n=10000)[source]
stream_framework.utils.datetime_to_epoch(dt)[source]

Convert datetime object to epoch with millisecond accuracy

stream_framework.utils.epoch_to_datetime(time_)[source]
stream_framework.utils.get_class_from_string(path, default=None)[source]

Return the class specified by the string.

stream_framework.utils.get_metrics_instance()[source]

Returns an instance of the metric class as defined in stream_framework settings.

stream_framework.utils.make_list_unique(sequence, marker_function=None)[source]

Makes items in a list unique. Performance based on this blog post: http://www.peterbe.com/plog/uniqifiers-benchmark

class stream_framework.utils.memoized(func)[source]

Bases: object

Decorator. Caches a function’s return value each time it is called. If called later with the same arguments, the cached value is returned (not reevaluated).

stream_framework.utils.warn_on_duplicate(f)[source]
stream_framework.utils.warn_on_error(f, exceptions)[source]

Subpackages

aggregators Package

base Module
class stream_framework.aggregators.base.BaseAggregator(aggregated_activity_class=None, activity_class=None)[source]

Bases: object

Aggregators implement the combining of multiple activities into aggregated activities.

The two most important methods are aggregate and merge

Aggregate takes a list of activities and turns it into a list of aggregated activities

Merge takes two lists of aggregated activities and returns a list of new and changed aggregated activities

activity_class

alias of Activity

aggregate(activities)[source]
Parameters:activities – A list of activities
Returns list:A list of aggregated activities

Groups the activities (using get_group), ranks them using the given ranking function and returns the sorted aggregated activities

Example

aggregator = ModulusAggregator()
activities = [Activity(1), Activity(2)]
aggregated_activities = aggregator.aggregate(activities)
aggregated_activity_class

alias of AggregatedActivity

get_group(activity)[source]

Returns a group to stick this activity in

group_activities(activities)[source]

Groups the activities based on their group, found by running get_group(activity) on them

merge(aggregated, activities)[source]
Parameters:
  • aggregated – A list of aggregated activities
  • activities – A list of the new activities
Returns tuple:

Returns new, changed

Merges two lists of aggregated activities and returns the new aggregated activities and a from, to mapping of the changed aggregated activities

Example

aggregator = ModulusAggregator()
activities = [Activity(1), Activity(2)]
aggregated_activities = aggregator.aggregate(activities)
activities = [Activity(3), Activity(4)]
new, changed = aggregator.merge(aggregated_activities, activities)
for activity in new:
    print activity

# 'from' is a reserved word in Python, so unpack into different names
for old, updated in changed:
    print 'changed from %s to %s' % (old, updated)
rank(aggregated_activities)[source]

The ranking logic, for sorting aggregated activities

class stream_framework.aggregators.base.NotificationAggregator(aggregated_activity_class=None, activity_class=None)[source]

Bases: stream_framework.aggregators.base.RecentRankMixin, stream_framework.aggregators.base.BaseAggregator

Aggregates based on the same verb, object and day

get_group(activity)[source]

Returns a group based on the verb, object and day

class stream_framework.aggregators.base.RecentRankMixin[source]

Bases: object

Most recently updated aggregated activities are ranked first.

rank(aggregated_activities)[source]

The ranking logic, for sorting aggregated activities

class stream_framework.aggregators.base.RecentVerbAggregator(aggregated_activity_class=None, activity_class=None)[source]

Bases: stream_framework.aggregators.base.RecentRankMixin, stream_framework.aggregators.base.BaseAggregator

Aggregates based on the same verb and same time period

get_group(activity)[source]

Returns a group based on the day and verb

feed_managers Package

base Module

feeds Package

base Module
class stream_framework.feeds.base.BaseFeed(user_id)[source]

Bases: object

The feed class allows you to add and remove activities from a feed. Please find below a quick usage example.

Usage Example:

feed = BaseFeed(user_id)
# start by adding some existing activities to a feed
feed.add_many(activities)
# querying results
results = feed[:10]
# removing activities
feed.remove_many(activities)
# counting the number of items in the feed
count = feed.count()
feed.delete()

The feed is easy to subclass. Commonly you’ll want to change the max_length and the key_format.

Subclassing:

class MyFeed(BaseFeed):
    key_format = 'user_feed:%(user_id)s'
    max_length = 1000

Filtering and Pagination:

feed.filter(activity_id__gte=1)[:10]
feed.filter(activity_id__lte=1)[:10]
feed.filter(activity_id__gt=1)[:10]
feed.filter(activity_id__lt=1)[:10]

Activity storage and Timeline storage

To reduce timeline memory utilization the BaseFeed supports normalization of activity data.

The full activity data is stored only in the activity_storage, while the timeline keeps only activity references (referred to as activity_id in the code).

For this reason, when an activity is created it must be stored in the activity_storage before other timelines can refer to it.

eg.

feed = BaseFeed(user_id)
feed.insert_activity(activity)
follower_feed = BaseFeed(follower_user_id)
follower_feed.add(activity)

It is also possible to store the full data in the timeline storage.

The strategy used by the BaseFeed depends on the serializer utilized by the timeline_storage.

When activities are stored dehydrated (just references) the BaseFeed will query the activity_storage to return full activities.

eg.

feed = BaseFeed(user_id)
feed[:10]

gets the first 10 activities from the timeline_storage; if the results are not complete activities, the BaseFeed will hydrate them via the activity_storage

activity_class

alias of Activity

activity_serializer

alias of BaseSerializer

activity_storage_class

alias of BaseActivityStorage

add(activity, *args, **kwargs)[source]
add_many(activities, batch_interface=None, trim=True, *args, **kwargs)[source]

Add many activities

Parameters:
  • activities – a list of activities
  • batch_interface – the batch interface
count()[source]

Count the number of items in the feed

delete()[source]

Delete the entire feed

filter(**kwargs)[source]

Filter based on the kwargs given, uses django orm like syntax

Example:

# filter between 100 and 200
feed = feed.filter(activity_id__gte=100)
feed = feed.filter(activity_id__lte=200)
# the same statement but in one step
feed = feed.filter(activity_id__gte=100, activity_id__lte=200)
filtering_supported = False
classmethod flush()[source]
get_activity_slice(start=None, stop=None, rehydrate=True)[source]

Gets activity_ids from timeline_storage and then loads the actual data querying the activity_storage

classmethod get_activity_storage()[source]

Returns an instance of the activity storage

classmethod get_timeline_batch_interface()[source]
classmethod get_timeline_storage()[source]

Returns an instance of the timeline storage

classmethod get_timeline_storage_options()[source]

Returns the options for the timeline storage

hydrate_activities(activities)[source]

hydrates the activities using the activity_storage

index_of(activity_id)[source]

Returns the index of the activity id

Parameters:activity_id – the activity id
classmethod insert_activities(activities, **kwargs)[source]

Inserts an activity to the activity storage

Parameters:activity – the activity class
classmethod insert_activity(activity, **kwargs)[source]

Inserts an activity to the activity storage

Parameters:activity – the activity class
key_format = 'feed_%(user_id)s'
max_length = 100
needs_hydration(activities)[source]

checks if the activities are dehydrated

on_update_feed(new, deleted)[source]

A hook called when activities are created or removed from the feed

order_by(*ordering_args)[source]

Change default ordering

ordering_supported = False
remove(activity_id, *args, **kwargs)[source]
classmethod remove_activity(activity, **kwargs)[source]

Removes an activity from the activity storage

Parameters:activity – the activity class or an activity id
remove_many(activity_ids, batch_interface=None, trim=True, *args, **kwargs)[source]

Remove many activities

Parameters:activity_ids – a list of activities or activity ids
timeline_serializer

alias of SimpleTimelineSerializer

timeline_storage_class

alias of BaseTimelineStorage

trim(length=None)[source]

Trims the feed to the length specified

Parameters:length – the length to which to trim the feed, defaults to self.max_length
trim_chance = 0.01
class stream_framework.feeds.base.UserBaseFeed(user_id)[source]

Bases: stream_framework.feeds.base.BaseFeed

Implementation of the base feed with a different key format and a really large max_length

key_format = 'user_feed:%(user_id)s'
max_length = 1000000
cassandra Module
memory Module
class stream_framework.feeds.memory.Feed(user_id)[source]

Bases: stream_framework.feeds.base.BaseFeed

activity_storage_class

alias of InMemoryActivityStorage

timeline_storage_class

alias of InMemoryTimelineStorage

redis Module
Subpackages
aggregated_feed Package
aggregated_feed Package
base Module
class stream_framework.feeds.aggregated_feed.base.AggregatedFeed(user_id)[source]

Bases: stream_framework.feeds.base.BaseFeed

Aggregated feeds are an extension of the basic feed. They turn activities into aggregated activities by using an aggregator class.

See BaseAggregator

You can use aggregated feeds to built smart feeds, such as Facebook’s newsfeed. Alternatively you can also use smart feeds for building complex notification systems.

Have a look at fashiolista.com for the possibilities.

Note

Aggregated feeds do more work in the fanout phase. Remember that for every user activity the number of fanouts is equal to their number of followers. So with 1,000 user activities and an average of 500 followers per user, you already end up running 500,000 fanout operations

Since the fanout operation happens so often, you should make sure not to do any queries in the fanout phase or any other resource intensive operations.

Aggregated feeds differ from feeds in a few ways:

  • Aggregator classes aggregate activities into aggregated activities
  • We need to update aggregated activities instead of only appending
  • Serialization is different
add_many(activities, trim=True, current_activities=None, *args, **kwargs)[source]

Adds many activities to the feed

Unfortunately we can’t support the batch interface. The writes depend on the reads.

Also subsequent writes will depend on these writes. So no batching is possible at all.

Parameters:activities – the list of activities
add_many_aggregated(aggregated, *args, **kwargs)[source]

Adds the list of aggregated activities

Parameters:aggregated – the list of aggregated activities to add
aggregated_activity_class

alias of AggregatedActivity

aggregator_class

alias of RecentVerbAggregator

contains(activity)[source]

Checks if the activity is present in any of the aggregated activities

Parameters:activity – the activity to search for
get_aggregator()[source]

Returns the class used for aggregation

classmethod get_timeline_storage_options()[source]

Returns the options for the timeline storage

merge_max_length = 20
remove_many(activities, batch_interface=None, trim=True, *args, **kwargs)[source]

Removes many activities from the feed

Parameters:activities – the list of activities to remove
remove_many_aggregated(aggregated, *args, **kwargs)[source]

Removes the list of aggregated activities

Parameters:aggregated – the list of aggregated activities to remove
timeline_serializer

alias of AggregatedActivitySerializer

cassandra Module
redis Module
notification_feed Module

storage Package

base Module
class stream_framework.storage.base.BaseActivityStorage(serializer_class=None, activity_class=None, **options)[source]

Bases: stream_framework.storage.base.BaseStorage

The Activity storage globally stores a key value mapping. This is used to store the mapping between an activity_id and the actual activity object.

Example:

storage = BaseActivityStorage()
storage.add_many(activities)
storage.get_many(activity_ids)

The storage specific functions are located in

  • add_to_storage
  • get_from_storage
  • remove_from_storage
add(activity, *args, **kwargs)[source]
add_many(activities, *args, **kwargs)[source]

Adds many activities and serializes them before forwarding this to add_to_storage

Parameters:activities – the list of activities
add_to_storage(serialized_activities, *args, **kwargs)[source]

Adds the serialized activities to the storage layer

Parameters:serialized_activities – a dictionary with {id: serialized_activity}
get(activity_id, *args, **kwargs)[source]
get_from_storage(activity_ids, *args, **kwargs)[source]

Retrieves the given activities from the storage layer

Parameters:activity_ids – the list of activity ids
Returns dict:a dictionary mapping activity ids to activities
get_many(activity_ids, *args, **kwargs)[source]

Gets many activities and deserializes them

Parameters:activity_ids – the list of activity ids
remove(activity, *args, **kwargs)[source]
remove_from_storage(activity_ids, *args, **kwargs)[source]

Removes the specified activities

Parameters:activity_ids – the list of activity ids
remove_many(activities, *args, **kwargs)[source]

Figures out the ids of the given activities and forwards the removal to the remove_from_storage function

Parameters:activities – the list of activities
class stream_framework.storage.base.BaseStorage(serializer_class=None, activity_class=None, **options)[source]

Bases: object

The feed uses two storage classes: the Activity Storage and the Timeline Storage

The process works as follows:

feed = BaseFeed()
# the activity storage is used to store the activity and mapped to an id
feed.insert_activity(activity)
# now the id is inserted into the timeline storage
feed.add(activity)

Currently there are two activity storage classes ready for production:

  • Cassandra
  • Redis

The storage classes always receive a full activity object. The serializer class subsequently determines how to transform the activity into something the database can store.

activities_to_ids(activities_or_ids)[source]

Utility function allowing lower levels to work with either activities or activity ids

activity_class

alias of Activity

activity_to_id(activity)[source]
aggregated_activity_class

alias of AggregatedActivity

default_serializer_class

The default serializer class to use

alias of DummySerializer

deserialize_activities(serialized_activities)[source]

Deserializes the list of activities

Parameters:serialized_activities – a dictionary with activity ids and serialized activities
flush()[source]

Flushes the entire storage

metrics = <stream_framework.metrics.base.Metrics object>
serialize_activities(activities)[source]

Serializes the list of activities

Parameters:activities – the list of activities
serialize_activity(activity)[source]

Serialize the activity and returns the serialized activity

Returns str:the serialized activity
serializer

Returns an instance of the serializer class

The serializer needs to know about the activity and aggregated activity classes we’re using

class stream_framework.storage.base.BaseTimelineStorage(serializer_class=None, activity_class=None, **options)[source]

Bases: stream_framework.storage.base.BaseStorage

The Timeline storage class handles the feed/timeline sorted part of storing a feed.

Example:

storage = BaseTimelineStorage()
storage.add_many(key, activities)
# get a sorted slice of the feed
storage.get_slice(key, start, stop)
storage.remove_many(key, activities)

The storage specific functions are located in

add(key, activity, *args, **kwargs)[source]
add_many(key, activities, *args, **kwargs)[source]

Adds the activities to the feed on the given key (The serialization is done by the serializer class)

Parameters:
  • key – the key at which the feed is stored
  • activities – the activities which to store
count(key, *args, **kwargs)[source]
default_serializer_class

alias of SimpleTimelineSerializer

delete(key, *args, **kwargs)[source]
get_batch_interface()[source]

Returns a context manager which ensures all subsequent operations happen via a batch interface

An example is redis.map

get_index_of(key, activity_id)[source]
get_slice(key, start, stop, filter_kwargs=None, ordering_args=None)[source]

Returns a sorted slice from the storage

Parameters:key – the key at which the feed is stored
get_slice_from_storage(key, start, stop, filter_kwargs=None, ordering_args=None)[source]
Parameters:
  • key – the key at which the feed is stored
  • start – start
  • stop – stop
Returns list:

Returns a list with tuples of key,value pairs

index_of(key, activity_or_id)[source]

Returns activity’s index within a feed or raises ValueError if not present

Parameters:
  • key – the key at which the feed is stored
  • activity_id – the activity’s id to search
remove(key, activity, *args, **kwargs)[source]
remove_from_storage(key, serialized_activities)[source]
remove_many(key, activities, *args, **kwargs)[source]

Removes the activities from the feed on the given key (The serialization is done by the serializer class)

Parameters:
  • key – the key at which the feed is stored
  • activities – the activities which to remove
trim(key, length)[source]

Trims the feed to the given length

Parameters:
  • key – the key location
  • length – the length to which to trim
memory Module
class stream_framework.storage.memory.InMemoryActivityStorage(serializer_class=None, activity_class=None, **options)[source]

Bases: stream_framework.storage.base.BaseActivityStorage

add_to_storage(activities, *args, **kwargs)[source]
flush()[source]
get_from_storage(activity_ids, *args, **kwargs)[source]
remove_from_storage(activity_ids, *args, **kwargs)[source]
class stream_framework.storage.memory.InMemoryTimelineStorage(serializer_class=None, activity_class=None, **options)[source]

Bases: stream_framework.storage.base.BaseTimelineStorage

add_to_storage(key, activities, *args, **kwargs)[source]
contains(key, activity_id)[source]
count(key, *args, **kwargs)[source]
delete(key, *args, **kwargs)[source]
classmethod get_batch_interface()[source]
get_index_of(key, activity_id)[source]
get_slice_from_storage(key, start, stop, filter_kwargs=None, ordering_args=None)[source]
remove_from_storage(key, activities, *args, **kwargs)[source]
trim(key, length)[source]
stream_framework.storage.memory.reverse_bisect_left(a, x, lo=0, hi=None)[source]

Same as Python's bisect.bisect_left but for lists in reversed order

Subpackages
cassandra Package
cassandra Package
connection Module
redis Package
activity_storage Module
connection Module
timeline_storage Module
Subpackages
structures Package
base Module
hash Module
list Module
sorted_set Module

verbs Package

verbs Package
stream_framework.verbs.get_verb_by_id(verb_id)[source]
stream_framework.verbs.get_verb_storage()[source]
stream_framework.verbs.register(verb)[source]

Registers the given verb class

base Module
class stream_framework.verbs.base.Add[source]

Bases: stream_framework.verbs.base.Verb

id = 4
infinitive = 'add'
past_tense = 'added'
class stream_framework.verbs.base.Comment[source]

Bases: stream_framework.verbs.base.Verb

id = 2
infinitive = 'comment'
past_tense = 'commented'
class stream_framework.verbs.base.Follow[source]

Bases: stream_framework.verbs.base.Verb

id = 1
infinitive = 'follow'
past_tense = 'followed'
class stream_framework.verbs.base.Love[source]

Bases: stream_framework.verbs.base.Verb

id = 3
infinitive = 'love'
past_tense = 'loved'
class stream_framework.verbs.base.Verb[source]

Bases: object

Every activity has a verb and an object. Nomenclature is loosely based on http://activitystrea.ms/specs/atom/1.0/#activity.summary

id = 0
serialize()[source]
