{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Bag: Parallel Lists for semi-structured data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dask-bag excels in processing data that can be represented as a sequence of arbitrary inputs. We'll refer to this as \"messy\" data, because it can contain complex nested structures, missing fields, mixtures of data types, etc. The *functional* programming style fits very nicely with standard Python iteration, such as can be found in the `itertools` module.\n", "\n", "Messy data is often encountered at the beginning of data processing pipelines when large volumes of raw data are first consumed. The initial set of data might be JSON, CSV, XML, or any other format that does not enforce strict structure and datatypes.\n", "For this reason, the initial data massaging and processing is often done with Python `list`s, `dict`s, and `set`s.\n", "\n", "These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterators/generator expressions or libraries like `itertools` or [`toolz`](https://toolz.readthedocs.io/en/latest/) let us process large volumes in a small space. If we combine this with parallel processing then we can churn through a fair amount of data.\n", "\n", "Dask.bag is a high level Dask collection to automate common workloads of this form. In a nutshell\n", "\n", " dask.bag = map, filter, toolz + parallel execution\n", " \n", "**Related Documentation**\n", "\n", "* [Bag documentation](https://docs.dask.org/en/latest/bag.html)\n", "* [Bag screencast](https://youtu.be/-qIiJ1XtSv0)\n", "* [Bag API](https://docs.dask.org/en/latest/bag-api.html)\n", "* [Bag examples](https://examples.dask.org/bag.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%run prep.py -d accounts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, we'll use the distributed scheduler. Schedulers will be explained in depth [later](05_distributed.ipynb)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask.distributed import Client\n", "\n", "client = Client(n_workers=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can create a `Bag` from a Python sequence, from files, from data on S3, etc.\n", "We demonstrate using `.take()` to show elements of the data. (Doing `.take(1)` results in a tuple with one element)\n", "\n", "Note that the data are partitioned into blocks, and there are many items per block. In the first example, the two partitions contain five elements each, and in the following two, each file is partitioned into one or more bytes blocks." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# each element is an integer\n", "import dask.bag as db\n", "b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], npartitions=2)\n", "b.take(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# each element is a text file, where each line is a JSON object\n", "# note that the compression is handled automatically\n", "import os\n", "b = db.read_text(os.path.join('data', 'accounts.*.json.gz'))\n", "b.take(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Edit sources.py to configure source locations\n", "import sources\n", "sources.bag_url" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Requires `s3fs` library\n", "# each partition is a remote CSV text file\n", "b = db.read_text(sources.bag_url,\n", " storage_options={'anon': True})\n", "b.take(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Manipulation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`Bag` objects hold the standard functional API found in projects like the Python standard library, `toolz`, or `pyspark`, including `map`, `filter`, `groupby`, etc..\n", "\n", "Operations on `Bag` objects create new bags. Call the `.compute()` method to trigger execution, as we saw for `Delayed` objects. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def is_even(n):\n", " return n % 2 == 0\n", "\n", "b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\n", "c = b.filter(is_even).map(lambda x: x ** 2)\n", "c" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# blocking form: wait for completion (which is very fast in this case)\n", "c.compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Accounts JSON data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've created a fake dataset of gzipped JSON data in your data directory. This is like the example used in the `DataFrame` example we will see later, except that it has bundled up all of the entries for each individual `id` into a single record. This is similar to data that you might collect off of a document store database or a web API.\n", "\n", "Each line is a JSON encoded dictionary with the following keys\n", "\n", "* id: Unique identifier of the customer\n", "* name: Name of the customer\n", "* transactions: List of `transaction-id`, `amount` pairs, one for each transaction for the customer in that file" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename = os.path.join('data', 'accounts.*.json.gz')\n", "lines = db.read_text(filename)\n", "lines.take(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our data comes out of the file as lines of text. Notice that file decompression happened automatically. We can make this data look more reasonable by mapping the `json.loads` function onto our bag." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "js = lines.map(json.loads)\n", "# take: inspect first few elements\n", "js.take(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Basic Queries" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we parse our JSON data into proper Python objects (`dict`s, `list`s, etc.) we can perform more interesting queries by creating small Python functions to run on our data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# filter: keep only some elements of the sequence\n", "js.filter(lambda record: record['name'] == 'Alice').take(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def count_transactions(d):\n", " return {'name': d['name'], 'count': len(d['transactions'])}\n", "\n", "# map: apply a function to each element\n", "(js.filter(lambda record: record['name'] == 'Alice')\n", " .map(count_transactions)\n", " .take(5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# pluck: select a field, as from a dictionary, element[field]\n", "(js.filter(lambda record: record['name'] == 'Alice')\n", " .map(count_transactions)\n", " .pluck('count')\n", " .take(5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Average number of transactions for all of the Alice entries\n", "(js.filter(lambda record: record['name'] == 'Alice')\n", " .map(count_transactions)\n", " .pluck('count')\n", " .mean()\n", " .compute())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Use `flatten` to de-nest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the example below we see the use of `.flatten()` to flatten results. We compute the average amount for all transactions for all Alices." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(js.filter(lambda record: record['name'] == 'Alice')\n", " .pluck('transactions')\n", " .take(3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(js.filter(lambda record: record['name'] == 'Alice')\n", " .pluck('transactions')\n", " .flatten()\n", " .take(3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(js.filter(lambda record: record['name'] == 'Alice')\n", " .pluck('transactions')\n", " .flatten()\n", " .pluck('amount')\n", " .take(3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(js.filter(lambda record: record['name'] == 'Alice')\n", " .pluck('transactions')\n", " .flatten()\n", " .pluck('amount')\n", " .mean()\n", " .compute())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Groupby and Foldby" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Often we want to group data by some function or key. We can do this either with the `.groupby` method, which is straightforward but forces a full shuffle of the data (expensive) or with the harder-to-use but faster `.foldby` method, which does a streaming combined groupby and reduction.\n", "\n", "* `groupby`: Shuffles data so that all items with the same key are in the same key-value pair\n", "* `foldby`: Walks through the data accumulating a result per key\n", "\n", "*Note: the full groupby is particularly bad. 
In actual workloads you would do well to use `foldby` or switch to `DataFrame`s if possible.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `groupby`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Groupby collects items in your collection so that all items with the same value under some function are collected together into a key-value pair." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b = db.from_sequence(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'])\n", "b.groupby(len).compute() # names grouped by length" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b = db.from_sequence(list(range(10)))\n", "b.groupby(lambda x: x % 2).compute()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b.groupby(lambda x: x % 2).starmap(lambda k, v: (k, max(v))).compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `foldby`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Foldby can be quite odd at first. It is similar to the following functions from other libraries:\n", "\n", "* [`toolz.reduceby`](http://toolz.readthedocs.io/en/latest/streaming-analytics.html#streaming-split-apply-combine)\n", "* [`pyspark.RDD.combineByKey`](http://abshinn.github.io/python/apache-spark/2014/10/11/using-combinebykey-in-apache-spark/)\n", "\n", "When using `foldby` you provide \n", "\n", "1. A key function on which to group elements\n", "2. A binary operator such as you would pass to `reduce` that you use to perform reduction per each group\n", "3. A combine binary operator that can combine the results of two `reduce` calls on different parts of your dataset.\n", "\n", "Your reduction must be associative. It will happen in parallel in each of the partitions of your dataset. Then all of these intermediate results will be combined by the `combine` binary operator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b.foldby(lambda x: x % 2, binop=max, combine=max).compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example with account data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We find the number of people with the same name." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "# Warning, this one takes a while...\n", "result = js.groupby(lambda item: item['name']).starmap(lambda k, v: (k, len(v))).compute()\n", "print(sorted(result))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "# This one is comparatively fast and produces the same result.\n", "from operator import add\n", "def incr(tot, _):\n", " return tot + 1\n", "\n", "result = js.foldby(key='name', \n", " binop=incr, \n", " initial=0, \n", " combine=add, \n", " combine_initial=0).compute()\n", "print(sorted(result))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise: compute total amount per name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to groupby (or foldby) the `name` key, then add up the all of the amounts for each name.\n", "\n", "Steps\n", "\n", "1. Create a small function that, given a dictionary like \n", "\n", " {'name': 'Alice', 'transactions': [{'amount': 1, 'id': 123}, {'amount': 2, 'id': 456}]}\n", " \n", " produces the sum of the amounts, e.g. `3`\n", " \n", "2. 
"2. Slightly change the binary operator of the `foldby` example above so that it accumulates the sum of the amounts instead of counting the number of entries." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrames" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the same reasons that Pandas is often faster than pure Python, `dask.dataframe` can be faster than `dask.bag`. We will work more with DataFrames later, but from the point of view of a Bag, a DataFrame is frequently the end-point of the \"messy\" part of data ingestion: once the data can be made into a dataframe, complex split-apply-combine logic becomes much more straightforward and efficient.\n", "\n", "You can transform a bag with a simple tuple or flat dictionary structure into a `dask.dataframe` with the `to_dataframe` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df1 = js.to_dataframe()\n", "df1.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This now looks like a well-defined DataFrame, and we can apply Pandas-like computations to it efficiently." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using a Dask DataFrame, how long does it take to do our prior computation of the number of people with the same name? It turns out that `dask.dataframe.groupby()` beats `dask.bag.groupby()` by more than an order of magnitude, but it still cannot match `dask.bag.foldby()` for this case." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%time df1.groupby('name').id.count().compute().head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Denormalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This DataFrame format is less than optimal because the `transactions` column is filled with nested data, so Pandas has to fall back to the `object` dtype, which is quite slow to process. Ideally we want to transform to a dataframe only after we have flattened our data, so that every field in each record is a simple `int`, `str`, `float`, etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def denormalize(record):\n", "    # returns a list for each person, one item per transaction\n", "    return [{'id': record['id'],\n", "             'name': record['name'],\n", "             'amount': transaction['amount'],\n", "             'transaction-id': transaction['transaction-id']}\n", "            for transaction in record['transactions']]\n", "\n", "transactions = js.map(denormalize).flatten()\n", "transactions.take(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = transactions.to_dataframe()\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "# number of transactions per name\n", "# note that the time here includes the data load and ingestion\n", "df.groupby('name')['transaction-id'].count().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Limitations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Bags provide very general computation (any Python function). This generality\n", "comes at a cost. Bags have the following known limitations:\n", "\n", "1. Bag operations tend to be slower than array/dataframe computations in the\n", "   same way that Python tends to be slower than NumPy/Pandas\n", "2. `Bag.groupby` is slow. You should try to use `Bag.foldby` if possible.\n", "   Using `Bag.foldby` requires more thought. Even better, consider creating\n", "   a normalized dataframe." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learn More\n", "\n", "* [Bag documentation](https://docs.dask.org/en/latest/bag.html)\n", "* [Bag screencast](https://youtu.be/-qIiJ1XtSv0)\n", "* [Bag API](https://docs.dask.org/en/latest/bag-api.html)\n", "* [Bag examples](https://examples.dask.org/bag.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Shutdown" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "client.shutdown()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }