Clojure for Data Science
http://clojuredatascience.com/
This is the blog to accompany the book Clojure for Data Science published by Packt.
Sun, 06 Jan 2019 17:28:01 +0000
clj-rss
http://clojuredatascience.com/posts/2019-01-05-boring-data-science-with-kixi-stats.html
http://clojuredatascience.com/posts/2019-01-05-boring-data-science-with-kixi-stats.html
Boring data science with kixi.stats
<p>I'm pleased to announce that <a href='https://cljdoc.org/d/kixi/stats/0.5.0/doc/readme'>kixi.stats v0.5.0</a> has been released. Some of the new goodies are detailed below, but since it's early January 2019 (and close to the 3rd anniversary of the first commit), I can't help but reflect on the new features in the context of kixi.stats' evolution since its inception as well.</p><p><a href='https://github.com/mastodonc/kixi.stats'>kixi.stats</a> was originally conceived as <i>a library of statistical reducing functions</i>, which is to say functions which can be supplied to <a href='https://clojure.org/reference/transducers'><code>transduce</code></a>. Accordingly, almost the entirety of <a href='https://cljdoc.org/d/kixi/stats/0.5.0/api/kixi.stats.core'>kixi.stats.core</a> is given over to reducing functions (each of which accepts elements from a sequence in turn and subsequently returns some derived value once the sequence is exhausted). Routines ranging from the trivial, such as calculating the <code>min</code> & <code>max</code>, through descriptive statistics such as the <code>standard-deviation</code> or <code>kurtosis</code>, to inferential hypothesis tests such as the <code>z-test</code> and <code>chi-squared-test</code>, are all expressed as reducing functions.</p><p>In his discursive video <a href='https://www.youtube.com/watch?v=VPp0ahoQR3o'>What Clojure needs to grow</a>, <a href='https://twitter.com/ericnormand'>Eric Normand</a> presents his view that frameworks tackling the more mundane aspects of data science (and web development) are necessary for Clojure to reach a broader audience. In particular, he seeks more <i>boring data science</i> support. 
I'm not going to get into the framework vs library debate here, but for <a href='https://www.infoq.com/articles/data-science-abstraction'>many reasons I detailed for InfoQ</a> a while ago, I think that Clojure's natural strengths make it a great fit for doing daily data science.</p><p>It's in the area of statistical inference where recent kixi.stats work has been most focused. Boring data science sometimes means routine hypothesis testing on small datasets, and kixi.stats finally has <code>t-test</code> (implementing <a href='https://en.wikipedia.org/wiki/Welch%27s_t-test'>Welch's unequal variances</a> t-test) and <code>simple-t-test</code> functions. In addition they—together with the <code>z-test</code> and <code>chi-squared-test</code> functions—all now return an implementation of the <code>PTestResult</code> protocol. This mandates implementations of <code>p-value</code> and <code>significant?</code> (with the option to override default one- or two-tailed test behaviour).</p><p><img src="/img/kixi-stats-regression-intervals.png" alt="Linear regression confidence & prediction intervals" /></p><p>kixi.stats is now over 3 years old, and the self-imposed goal of meeting all challenges with a new reducing function has become a hindrance to further development. The library has grown to include 7 namespaces: <code>core</code>, <code>digest</code>, <code>distribution</code>, <code>estimate</code>, <code>math</code>, <code>protocols</code>, and <code>test</code>. The <a href='https://cljdoc.org/d/kixi/stats/0.5.0/api/kixi.stats.estimate'>estimate</a> namespace is new in the latest version. It contains a handful of functions, each of which accepts a <code>sum-squares</code> covariance digest and returns an estimate, e.g. 
of the line of best fit, regression or prediction interval at a given alpha value.</p><p>Hopefully the above serves to illustrate how kixi.stats has grown to include—in addition to simple descriptive scalar statistics—support for models, inference and more. I'm excited about each new release, and building confidence that there is room to complement the more specialist data science heavyweights such as the <a href='https://github.com/uncomplicate'>uncomplicate libraries</a> or <a href='https://mxnet.apache.org/api/clojure/index.html'>MXNet</a>. I'm not ready to uninstall RStudio just yet, but the roadmap for the coming year should bring kixi.stats, and Clojure, some really <strong>boring</strong> new features. </p><p>Do you have an idea of what you'd like a boring data science framework to be? <a href='mailto:henry@henrygarner.com'>Email</a> or <a href='https://twitter.com/henrygarner'>Tweet</a> me!</p><p><i>Henry Garner is the author of kixi.stats.</i></p>
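To make the reducing-function style described above concrete, here is a minimal sketch of computing a sample standard deviation in a single pass with <code>transduce</code>. The data and the <code>:price</code> key are made up for illustration:

```clojure
(require '[kixi.stats.core :as kixi])

;; Illustrative data: a reducing function from kixi.stats.core is
;; handed straight to transduce, summarising the sequence in one pass.
(def data [{:price 1.0} {:price 2.0} {:price 4.0} {:price 7.0}])

(transduce (map :price) kixi/standard-deviation data)
```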
Sat, 05 Jan 2019 00:00:00 +0000
http://clojuredatascience.com/posts/2018-09-15-data-science-with-kixi-stats.html
http://clojuredatascience.com/posts/2018-09-15-data-science-with-kixi-stats.html
Data Science Screencasts from Lambda Island
<p><a href='https://lambdaisland.com/'>Lambda Island</a> has produced a high-quality Clojure-focused screencast every few weeks over the past couple of years. <a href='https://twitter.com/plexus'>Arne Brasseur</a> is the author and producer of these videos. To date he's generated diverse walk-throughs on language fundamentals, library APIs, architectural considerations, and development tooling. The most recent two screencasts, numbers 43 & 44, have signalled a move into the realm of data science. Both screencasts make heavy use of <a href='https://github.com/mastodonC/kixi.stats'>kixi.stats</a>: a library of statistical reducing functions. As the author of kixi.stats, I'm relieved to say that I think that the topics are dealt with really well.</p><p>The screencasts are intended for a broad audience, and the material covered is beginner-friendly. The <a href='https://lambdaisland.com/episodes/clojure-data-science-kixi-stats'>first video</a> could more accurately be titled <i>"Exploratory data analysis with Clojure's Transducers"</i>, but in any case this is worthy material for a video of this length. If you want to understand how Clojure's transducers can be used to create summary statistics in a single pass over the data, and how to render empirical distributions to histograms and interpret them, this is a great place to start.</p><p>The <a href='https://lambdaisland.com/episodes/clojure-data-science-kixi-stats-2'>second video</a> builds towards the creation of a simple linear model. Along the way, standard data science concerns regarding data scrubbing, feature selection and model evaluation are covered. The screencast also demonstrates how to make use of transducers and composite reducing functions to achieve non-trivial results. For example, at one point we are shown how to create a reducing function which returns a linear model expressed as a function of <i>x</i> and <i>y</i>. Chaining a post-complete function to a reducing function to return a function?! 
It's functions all the way down!</p><ul><li><a href='https://lambdaisland.com/episodes/clojure-data-science-kixi-stats'>Data Science with kixi.stats, part 1</a></li><li><a href='https://lambdaisland.com/episodes/clojure-data-science-kixi-stats-2'>Data Science with kixi.stats, part 2</a></li></ul><p>A free trial is available, and discounts are available for those unable to cover the full fee. Whether or not you decide to subscribe, the code for each episode is freely accessible on GitHub. For example, <a href='https://github.com/lambdaisland/ep43-data-science-kixi-stats'>here's the code for episode 43</a>.</p><p>For me, the graphs namespace is particularly noteworthy because Clojure currently lacks a flexible high-level charting library. As a complete tangent to the main content of his screencasts, Arne has done an excellent job of spiking out a set of functions which could form the basis of such a library. Watch this space...</p><p><i>Disclosure: Henry was granted free access to Lambda Island in exchange for advising on the content of the episodes linked above.</i></p>
Sat, 15 Sep 2018 00:00:00 +0100
http://clojuredatascience.com/posts/2016-12-02-data-science-ladder-abstraction.html
http://clojuredatascience.com/posts/2016-12-02-data-science-ladder-abstraction.html
Data Science Up and Down the Ladder of Abstraction
<p> <a href='https://twitter.com/henrygarner'>Henry</a> wrote an article for InfoQ as part of their <a href='https://www.infoq.com/handle-data-science'>Getting a Handle on Data Science</a> series.</p><p>The article was titled <a href='https://www.infoq.com/articles/data-science-abstraction'>Data Science Up and Down the Ladder of Abstraction</a> and in it he set out his reasons for adopting Clojure, including its:</p><ul><li>Excellent tooling for REPL-driven development and notebooks for exploratory analysis and live charting.</li><li>Suitability for creating shareable browser-based visualisations and interactive simulations with modern UI frameworks like Facebook's React.</li><li>Interoperability with Java, JavaScript, and Scala for making use of established and actively developed numerical optimisation, distributed computing, and machine learning libraries.</li><li>Data-oriented functional core with a focus on composable sequence processing interfaces which scale with the volume of data you have.</li><li>Clean and unambiguous syntax which even aids my comprehension of transcribed algorithms.</li><li>Macro system which enables extensibility at the language level for custom algorithmic abstractions and cleaner code.</li></ul><p>You can read the full article <a href='https://www.infoq.com/articles/data-science-abstraction'>here</a>.</p>
Fri, 02 Dec 2016 00:00:00 +0000
http://clojuredatascience.com/posts/2016-12-01-clojure-machine-learning.html
http://clojuredatascience.com/posts/2016-12-01-clojure-machine-learning.html
Clojure for Machine Learning
<br /><p><a href='https://twitter.com/henrygarner'>Henry</a> gave a talk at London's <a href='https://skillsmatter.com/conferences/7430-clojure-exchange-2016#program'>Clojure eXchange 2016</a> on the subject of machine learning in Clojure.</p><p>From the programme:</p><p><blockquote><p> You have heard that Clojure’s ancestor language, Lisp, was developed for artificial intelligence research. Yet, until recently, Clojure’s support for machine learning has mostly consisted of wrapped Java libraries.</p></p><p><p>Informed by his work as a freelance data scientist working primarily with Clojure, Henry will explore some newer libraries that enable sophisticated data analysis unencumbered by Java’s legacy. You will also learn how Clojure’s core sequence-processing strengths naturally lend themselves to simple machine learning techniques you can try at home. </p></blockquote></p><p>You can see a <a href='https://skillsmatter.com/skillscasts/9050-clojure-for-machine-learning'>video of the talk</a> on the Skills Matter website, the <a href='https://github.com/henrygarner/cljx-december-2016'>org-mode slides</a>, or a PDF of the presentation below.</p><p><script async class="speakerdeck-embed" data-id="8afdab93e5b74236848186367c000ef7" data-ratio="1.37081659973226" src="//speakerdeck.com/assets/embed.js"></script></p>
Thu, 01 Dec 2016 00:00:00 +0000
http://clojuredatascience.com/posts/2015-12-05-expressive-parallel-analytics-with-clojure.html
http://clojuredatascience.com/posts/2015-12-05-expressive-parallel-analytics-with-clojure.html
Expressive Parallel Analytics with Clojure
<p><a href='https://twitter.com/henrygarner'>Henry</a> spoke at <a href='https://skillsmatter.com/conferences/6861-clojure-exchange-2015#program'>Clojure eXchange 2015</a> on the subject of parallel folds using Clojure's transducers.</p><p>From the programme:</p><p><blockquote><p> Sharing experience gained from his work on a mission-critical data product earlier this year, Henry will speak about some newer features of Clojure that enable data scientists to write concise, expressive and performant data processing code. He’ll explore transducers and reducing functions, and show how simple functional combinators can make even sophisticated analytical code both faster and easier to comprehend. </p></blockquote></p><p>The key takeaway is that with transducers, reducing functions must now provide an arity-1 <strong>completer</strong> implementation which provides them with the opportunity to convert an accumulated value into a final output. Together with the (optional) arity-0 <strong>initializer</strong> this gives reducing functions the ability to wrap arbitrarily intricate state. The natural composability of transducers can be mirrored with similarly composable reducing functions to build up sophisticated parallel computations.</p><p>You can see a <a href='https://skillsmatter.com/skillscasts/7243-expressive-parallel-analytics-with-clojure'>video of the talk</a> on Skills Matter's website, the <a href='https://github.com/henrygarner/cljx-december-2015'>org-mode slides</a>, or a PDF of the presentation below.</p><p><script async class="speakerdeck-embed" data-id="9945344f8783426d8e5fdffc90d5d818" data-ratio="1.37081659973226" src="//speakerdeck.com/assets/embed.js"></script></p>
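The initializer/completer contract described above can be sketched as a reducing function usable with <code>transduce</code>. This is a hypothetical example (the name <code>mean-rf</code> and the data are illustrative, not from the talk):

```clojure
;; A reducing function with the three arities transduce expects:
;; a 0-arity initializer, a 2-arity step, and a 1-arity completer
;; that turns the accumulated [sum count] state into a final mean.
(defn mean-rf
  ([] [0.0 0])
  ([[sum n] x] [(+ sum x) (inc n)])
  ([[sum n]] (when (pos? n) (/ sum n))))

(transduce (map inc) mean-rf [1 2 3 4])
;; => 3.5
```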
Sat, 05 Dec 2015 00:00:00 +0000
http://clojuredatascience.com/posts/2015-09-12-parallel-folds-reducers-tesser-linear-regression.html
http://clojuredatascience.com/posts/2015-09-12-parallel-folds-reducers-tesser-linear-regression.html
Parallel Folds: Reducers, Tesser and Linear Regression
<p>In this article, extracted from <em>Chapter 5, <a href='https://www.packtpub.com/big-data-and-business-intelligence/clojure-data-science'>Clojure for Data Science</a></em>, I'll show some principles of efficient big data analysis and how they can be applied using Clojure. I'll be using three Clojure libraries: <strong>reducers</strong>, <strong>Iota</strong>, and <strong>Tesser</strong>, to show how the calculation of statistics can be scaled to very large volumes of data through parallelism and by avoiding unnecessary iterations over the data.</p><p>By the end of this article you'll be able to build a predictive model using linear regression. Linear regression is a machine learning algorithm that attempts to learn a linear relationship between a single output (usually called the <em>dependent variable</em>) and one or more inputs (often called the <em>independent variables</em>). We'll be using data from the U.S. Internal Revenue Service (IRS) on ZIP code-level statistics of income, and attempt to learn a simple linear relationship between two variables: the "salaries and wages" and "unemployment compensation" figures.</p><h2><a name="download_the_code_and_data"></a>Download the code and data</h2><p>All the code is contained in the example project at <a href='https://github.com/clojuredatascience/ch5-big-data'>https://github.com/clojuredatascience/ch5-big-data</a>. If you'd like to follow along with any of the examples, clone this repository to your local machine.</p><h3><a name="download_the_data"></a>Download the data</h3><p>The IRS data I'll use for the examples contains selected income and tax items classified by state, ZIP code, and income classes. It's 100MB in size and should be downloaded from <a href='http://www.irs.gov/pub/irs-soi/12zpallagi.csv'>http://www.irs.gov/pub/irs-soi/12zpallagi.csv</a> to the example code's <code>data</code> directory. 
Since the file contains the IRS <em>Statistics of Income</em>, we've renamed the file to "soi.csv" for the examples.</p><p>If you're running *nix or OS X, there's a little shell script in the project which will download and rename the data for you. Run it on the command line within the project's directory like this:</p><pre><code class="sh">script/download-data.sh
</code></pre><p>Alternatively, if you're on Windows or would prefer to follow manual instructions:<ul><li>Download <code>12zpallagi.csv</code> into the sample code's <code>data</code> directory using the link above</li><li>Rename the file <code>12zpallagi.csv</code> to <code>soi.csv</code></li></ul><h3><a name="running_the_examples"></a>Running the examples</h3></p><p>The example project contains a namespace called <code>cljds.ch5.examples</code>. Each example below is a function in this namespace that you can run in one of two ways: either from the REPL or on the command line, with Leiningen. If you'd like to run the examples in the REPL execute:</p><pre><code class="sh">lein repl
</code></pre><p>on the command line. By default the REPL will open in the examples namespace and you can type the code you want to evaluate.</p><p>Alternatively, to run a specific numbered example you can execute:</p><pre><code class="sh">lein run --example 5.1
</code></pre><p>or pass the single-letter equivalent:</p><pre><code class="sh">lein run -e 5.1
</code></pre><p>This will run the function named <code>ex-5-1</code>. <strong>If you've followed the instructions above to download the data</strong>, you should now be able to run the examples.</p><h3><a name="inspect_the_data"></a>Inspect the data</h3><p>Take a look at the column headings in the first line of the file. One way to do this is to load the file into memory, split on newline characters, and take the first result. The Clojure core library function <code>slurp</code> will return the whole file as a string, <code>split</code> in the <code>clojure.string</code> namespace can chop the contents into lines based on the newline character, and <code>first</code> will return the first line:</p><pre><code class="clojure">(require '[clojure.string :as str])
(defn ex-5-1 []
  (-> (slurp "data/soi.csv")
      (str/split #"\n")
      (first)))
</code></pre> <br /><p>The file is around 100MB in size on disk. When loaded into memory and converted to object representations the data will occupy more space. This is incredibly wasteful when we're only interested in the first row.</p><p>Fortunately we don't have to load the whole file into memory if we take advantage of Clojure's lazy sequences. Instead of returning a string representation of the contents of the whole file, we could return a reference to the file and then step through it one line at a time:</p><pre><code class="clojure">(require '[clojure.java.io :as io])
(defn ex-5-2 []
  (-> (io/reader "data/soi.csv")
      (line-seq)
      (first)))
</code></pre><p>In the above code we're using <code>clojure.java.io/reader</code> to return a reference to the file and using the core function <code>line-seq</code> to return a lazy sequence of lines from the file. In this way we can read files even larger than available memory one line at a time.</p><p>The second approach is a much more efficient way of fetching the column headings. They're replicated below:</p><pre><code>"STATEFIPS,STATE,zipcode,AGI_STUB,N1,MARS1,MARS2,MARS4,PREP,N2,NUMDEP,A00100,N00200,A00200,N00300,A00300,N00600,A00600,N00650,A00650,N00900,A00900,SCHF,N01000,A01000,N01400,A01400,N01700,A01700,N02300,A02300,N02500,A02500,N03300,A03300,N00101,A00101,N04470,A04470,N18425,A18425,N18450,A18450,N18500,A18500,N18300,A18300,N19300,A19300,N19700,A19700,N04800,A04800,N07100,A07100,N07220,A07220,N07180,A07180,N07260,A07260,N59660,A59660,N59720,A59720,N11070,A11070,N09600,A09600,N06500,A06500,N10300,A10300,N11901,A11901,N11902,A11902"
</code></pre><p>This is a wide file! There are 77 columns overall, so we won't identify them all. The key fields we'll be interested in are:</p><ul><li><strong>N1</strong>: The number of returns</li><li><strong>A00200</strong>: The salaries and wages amount</li><li><strong>A02300</strong>: The unemployment compensation amount</li></ul><p>If you're curious about what else is contained in the file, the IRS data definition document is available at <a href='http://www.irs.gov/pub/irs-soi/12zpdoc.doc'>http://www.irs.gov/pub/irs-soi/12zpdoc.doc</a>.</p><h3><a name="counting_the_records"></a>Counting the records</h3><p>Our file is certainly wide, but is it tall? Let's see how many rows there are in the file. Having created a lazy sequence this is a simple matter of counting the length of the sequence.</p><pre><code class="clojure">(defn ex-5-3 []
  (-> (io/reader "data/soi.csv")
      (line-seq)
      (count)))
</code></pre><p>The above example returns 166,905 including the header row, so we know the file contains 166,904 rows of data.</p><p>The function <code>count</code> is the simplest way to count the number of elements in a sequence. For vectors (and other types implementing the <code>Counted</code> interface) this is also the most efficient, since the collection already knows how many elements it contains and therefore doesn't need to recalculate it. For lazy sequences however, the only way to determine how many elements are contained in the sequence is to step through it from beginning to end.</p><p>Clojure's implementation of <code>count</code> is written in Java, but it can be pictured as a reduce over the sequence like this:</p><pre><code class="clojure">(defn ex-5-4 []
  (->> (io/reader "data/soi.csv")
       (line-seq)
       (reduce (fn [i x]
                 (inc i))
               0)))
</code></pre><p>The function we pass to reduce above accepts a counter <code>i</code> and the next element from the sequence, <code>x</code>. For each <code>x</code> we simply increment the counter <code>i</code>. The <code>reduce</code> function accepts an 'initial value' of zero which represents the concept of 'nothing'. If there are no lines to reduce over, zero will be returned.</p><p>As of version 1.5, Clojure offers the Reducers library (<a href='http://clojure.org/reducers'>http://clojure.org/reducers</a>) which provides an alternative way of performing reductions that trades memory efficiency for speed.</p><h3><a name="the_reducers_library"></a>The reducers library</h3><p>The function we implemented above to count the records is a <em>sequential</em> algorithm. Each line is processed one-at-a-time until the sequence is exhausted. But there's nothing about the operation that demands it be done like this: we could split the lines into two sequences (ideally of roughly equal length) and reduce over each sequence independently. When we're done, we would just add the totals from each sequence together to get the grand total number of lines in the file.</p><p><img src="/img/parallel-reduce.png" alt="Parallel reduce" /></p><p>If each reduce ran on its own processing unit then the two count operations could be run in parallel. If we ignore the cost of splitting and recombining the sequences (which we can't, but it's often small compared to the work of the reduction itself), the algorithm could run twice as fast.</p><p>This is one of the aims of reducers: to bring the benefit of parallelism to algorithms implemented on a single machine by taking advantage of multiple cores.</p><h3><a name="parallel_folds_with_reducers"></a>Parallel folds with reducers</h3><p>The parallel version of <code>reduce</code> implemented by the reducers library is called <code>fold</code>. 
To construct a fold, we have to supply a <em>combiner</em> function (in addition to the <em>reducer</em> function) that will take the results of our reduced sequences (the partial row counts) and return the grand total result. Since our row counts are numbers, the combiner function is simply <code>+</code>.</p><p>The previous example, adjusted to use <code>clojure.core.reducers</code> as <code>r</code>, looks like this:</p><pre><code class="clojure">(require '[clojure.core.reducers :as r])
(defn ex-5-5 []
  (->> (io/reader "data/soi.csv")
       (line-seq)
       (r/fold + (fn [i x]
                   (inc i)))))
</code></pre><p>The combiner function, <code>+</code>, has been included as the <strong>first</strong> argument to <code>fold</code> and our unchanged reduce function is supplied as the second argument. We no longer need to pass the initial value of zero: <code>fold</code> will get the initial value by calling the combiner function with no arguments. Our example above works because <code>+</code> called with no arguments already returns zero:</p><pre><code class="clojure">(defn ex-5-6 []
  (+))
;; 0
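;; By the same token, the combiner for a product fold would be *,
;; whose zero-arity call returns the multiplicative identity:
(*)
;; 1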
</code></pre>To participate in folding, then, the combiner function must have two implementations: one with zero arguments that returns the identity value, and another with two arguments that 'combines' the arguments. Different folds will require different combiner functions and identity values, of course. For example, the identity value for multiplication is 1.<p>We can visualize the process of seeding the computation with an identity value, iteratively reducing over the sequence of $xs$, and combining the reductions into an output value as a tree:</p><p><img src="/img/reductions-tree.png" alt="Reductions tree" /></p><p>There may be more than two reductions to combine, of course, and in fact the default implementation of <code>fold</code> will split the input collection into chunks of 512 elements. Our 166,000-element sequence will therefore generate 325 reductions to be combined. We're going to run out of page real estate quite quickly with tree diagrams, so let's instead visualize the process more schematically: as a two-step <strong>reduce</strong> and <strong>combine</strong> process.</p><p>The first step performs a parallel reduce across all the chunks in the collection. The second step performs a serial reduce over the intermediate results to arrive at the final result:</p><p><img src="/img/reduce-combine.png" alt="Reduce-combine" /></p><p>The preceding diagram shows a reduce over several sequences of $xs$ in parallel, represented as circles, into a series of outputs, represented as squares. The squares are then combined serially to produce the final result, represented by a star.</p><h3><a name="loading_large_files_with_iota"></a>Loading large files with Iota</h3><p>Calling <code>fold</code> on a lazy sequence requires Clojure to realize the sequence into memory, and then chunk the sequence into groups for parallel execution. 
For situations where the calculation performed on each row is small, the overhead involved in this coordination outweighs the benefit of parallelism. We can improve the situation slightly by using a library called Iota (<a href='https://github.com/thebusby/iota'>https://github.com/thebusby/iota</a>).</p><p>The Iota library loads files directly into data structures suitable for folding over with reducers and can handle files larger than available memory by making use of memory-mapped files. With Iota in place of our <code>line-seq</code> function our line count function becomes:</p><pre><code class="clojure">(require 'iota)
(defn ex-5-7 []
  (->> (iota/seq "data/soi.csv")
       (r/fold + (fn [i x]
                   (inc i)))))
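;; Not from the book: r/fold's 4-arity form also lets us choose the
;; chunk size explicitly (the default is 512):
(defn ex-5-7-chunked []
  (->> (iota/seq "data/soi.csv")
       (r/fold 1024 + (fn [i x]
                        (inc i)))))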
</code></pre><p>So far we've just been working with sequences of unformatted lines, but if we're going to do anything more than count the rows we'll want to parse them into a more useful data structure. This is another area where Clojure's reducers can help make our code more efficient.</p><h3><a name="create_a_reducers_processing_pipeline"></a>Create a reducers processing pipeline</h3><p>We already know (from the header row) that the file is comma-separated, so let's first create a function to turn each row into a sequence of fields. All fields but the first two contain numeric data, so let's parse them into doubles while we're at it:</p><pre><code class="clojure">(defn parse-double [x]
  (Double/parseDouble x))

(defn parse-line [line]
  (let [[text-fields double-fields] (->> (str/split line #",")
                                         (split-at 2))]
    (concat text-fields
            (map parse-double double-fields))))
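;; For example, applied to an illustrative (truncated) row:
(parse-line "01,AL,0.0,1.0")
;; => ("01" "AL" 0.0 1.0)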
</code></pre><p>We'll use the reducers version of <code>map</code> to apply our <code>parse-line</code> function to each of the lines from the file in turn:</p><pre><code class="clojure">(defn ex-5-8 []
  (->> (iota/seq "data/soi.csv")
       (r/drop 1)
       (r/map parse-line)
       (r/take 1)
       (into [])))
;; [("01" "AL" 0.0 1.0 889920.0 490850.0 ...)]
</code></pre><p>The final <code>into</code> converts the reducers' internal representation (a <em>reducible collection</em>) into a Clojure vector. The previous example should return a sequence of 77 elements representing each column of the first row of the file after the header.</p><p>We're just dropping the column names at the moment, but it would be great if we could make use of these to return a map representation of each record associating the column name with the field value. The keys of the map would be the column headings and the values would be the parsed fields. The core library function <code>zipmap</code> will create a map out of two sequences: one for the keys and another for the values:</p><pre><code class="clojure">(defn parse-columns [line]
  (->> (str/split line #",")
       (map keyword)))

(defn ex-5-9 []
  (let [data (iota/seq "data/soi.csv")
        column-names (parse-columns (first data))]
    (->> (r/drop 1 data)
         (r/map parse-line)
         (r/map (fn [fields]
                  (zipmap column-names fields)))
         (r/take 1)
         (into []))))
</code></pre>This function returns a map representation of each row, a much more user-friendly data structure:<pre><code class="clojure">[{:N2 1505430.0, :A19300 181519.0, :MARS4 256900.0 ...}]
</code></pre><p>A great thing about Clojure's reducers is that, in the above computation, the calls to <code>r/map</code>, <code>r/drop</code> and <code>r/take</code> are compiled into a <em>reduction</em> that's performed in a single iteration over the data. This becomes particularly valuable as the number of operations increases.</p><p>For example, let's assume we'd also like to filter out zero ZIP codes. We could extend the reducers pipeline like this:</p><pre><code class="clojure">(defn ex-5-10 []
  (let [data (iota/seq "data/soi.csv")
        column-names (parse-columns (first data))]
    (->> (r/drop 1 data)
         (r/map parse-line)
         (r/map (fn [fields]
                  (zipmap column-names fields)))
         (r/remove (fn [record]
                     (zero? (:zipcode record))))
         (r/take 1)
         (into []))))
</code></pre><p>The <code>r/remove</code> step is now also being run together with the <code>r/map</code>, <code>r/drop</code> and <code>r/take</code> calls. As the size of data increases it becomes increasingly important to avoid making multiple iterations over the data unnecessarily. Using Clojure's reducers ensures that our calculations are compiled into just a single iteration.</p><h3><a name="curried_reductions_with_reducers"></a>Curried reductions with reducers</h3><p>To make it clearer what's going on, we can create a <em>curried</em> version of each of our above steps: to parse the lines, create a record from the fields, and filter zero ZIP codes. The curried version of the function is a reduction <em>"waiting for a collection"</em>:</p><pre><code class="clojure">(def line-formatter
  (r/map parse-line))

(defn record-formatter [column-names]
  (r/map (fn [fields]
           (zipmap column-names fields))))

(def remove-zero-zip
  (r/remove (fn [record]
              (zero? (:zipcode record)))))
</code></pre><p>In each case we're calling one of reducers' functions but without providing a collection to reduce over. The result is a function that can be applied to the collection at a later time, or composed together into a single <code>parse-file</code> function using <code>comp</code>:</p><pre><code class="clojure">(defn load-data [file]
  (let [data (iota/seq file)
        column-names (parse-columns (first data))
        parse-file (comp remove-zero-zip
                         (record-formatter column-names)
                         line-formatter)]
    (parse-file (rest data))))
</code></pre><p>It's only when the <code>parse-file</code> function is called with a sequence, as in the last line of the preceding example, that the pipeline is actually executed.</p><h3><a name="statistical_folds_with_reducers"></a>Statistical folds with reducers</h3><p>With the data parsed it's time to perform some descriptive statistics. Let's assume we'd like to know the average <em>number of returns</em> (column N1) submitted to the IRS by ZIP code. One way of doing this is to add up the values and divide by the count. Our first attempt might look like this:</p><pre><code class="clojure">(defn ex-5-11 []
  (let [data (load-data "data/soi.csv")
        xs (into [] (r/map :N1 data))]
    (/ (reduce + xs)
       (count xs))))
;; 853.37
</code></pre><p>While this works, it's comparatively slow. We iterate over the data once to create the <code>xs</code>, a second time to calculate the sum using <code>reduce</code> and a third time to calculate the <code>count</code>. The bigger our dataset gets, the larger the time penalty we'll pay for this repetition.</p><p>Ideally, we'd be able to calculate the mean value in a single iteration over the data, just like our <code>parse-file</code> function above (even better if we can perform it in parallel, too).</p><h3><a name="associativity"></a>Associativity</h3><p>Before we proceed, it's useful to take a moment to reflect on why the following code wouldn't do what we want:</p><pre><code class="clojure">(defn mean
  ([] 0)
  ([x y] (/ (+ x y) 2)))
</code></pre><br /><p>The <code>mean</code> function we've just defined has two arities: it has two different implementations, and the one invoked depends on the number of arguments provided when the function is called. Called without arguments, it returns 0 (the identity for the mean computation); called with two arguments, it returns their mean.</p><pre><code class="clojure">(defn ex-5-12 []
  (->> (load-data "data/soi.csv")
       (r/map :N1)
       (r/fold mean)))

;; 930.54
</code></pre>We obtained a mean of 853.37 previously. The example above folds over the data in column N1 with our <code>mean</code> function and produces a different result. Which is correct?<p>Well, if we could expand out the computation in the preceding example for the first three <code>xs</code>, we might see something like the following:</p><pre><code class="clojure">(mean (mean (mean 0 a) b) c)
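
;; Substituting concrete values for a, b and c makes the problem
;; apparent (a worked example not in the original text):
(mean (mean (mean 0 1) 2) 3)
;; => 17/8, whereas the true mean of [1 2 3] is 2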
</code></pre><p>This isn't the calculation we want to perform. In essence, each time we call <code>mean</code> with the results of a previous <code>mean</code>, we're taking "an average of an average". Technically speaking this is a bad idea because the mean function is not <em>associative</em>. For an associative function, the following equality holds true:</p><p>$$ f(f(a,b),c)=f(a,f(b,c)) $$</p><p>Addition and multiplication are associative, but subtraction and division are not; since our <code>mean</code> function divides by two at each step, it is not associative either. Contrast the <code>mean</code> function usage with the following using <code>+</code>:</p><pre><code class="clojure">(+ 1 (+ 2 3))
</code></pre><p>which yields an identical result to:</p><pre><code class="clojure">(+ (+ 1 2) 3)
</code></pre><p>It doesn't matter how we group the sub-calculations with <code>+</code> because addition is associative. Associativity is an important property of functions used with <code>fold</code> because, by definition, the results of a previous calculation are treated as inputs to the next.</p><p>The easiest way to calculate the mean associatively is to calculate the sum and the count separately. Since both the sum and the count can be calculated with <code>+</code>, they're associative and can be calculated in parallel over the data with <code>fold</code>.</p><p>The mean can then be calculated as the very last step simply by dividing one by the other.</p><h3><a name="calculating_the_mean_using_fold"></a>Calculating the mean using fold</h3><p>We'll create a fold to calculate the mean with two custom functions, <code>mean-combiner</code> and <code>mean-reducer</code>. This requires defining three entities:</p><ul><li>The identity value for the fold</li><li>The reducer function</li><li>The combiner function</li></ul><p>We discovered the benefits of associativity in the previous section, and so we'll want to update our intermediate mean using only associative operations by calculating the sum and the count separately. One way of representing the two values is a map of two keys, the <code>:sum</code> and the <code>:count</code>. The value that represents the identity for our mean would be a sum of zero and a count of zero, or a map such as the following: <code>{:sum 0 :count 0}</code>.</p><p>The combine function, <code>mean-combiner</code>, returns the identity value when it's called without arguments. The two-argument combiner needs to add together the <code>:count</code> and the <code>:sum</code> for each of the two arguments. We can achieve this by merging the maps with <code>merge-with</code> and <code>+</code>:</p><pre><code class="clojure">(defn mean-combiner
  ([] {:count 0 :sum 0})
  ([a b] (merge-with + a b)))
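
;; merge-with applies + to the values at matching keys, so combining
;; two interim states sums their counts and sums (example values):
(mean-combiner {:count 2 :sum 10} {:count 3 :sum 20})
;; => {:count 5, :sum 30}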
</code></pre><br /><p>The <code>mean-reducer</code> function needs to accept an accumulated value (either an identity value or the results of a previous reduction) and incorporate the new <code>x</code>. We do this simply by incrementing the <code>count</code> and adding <code>x</code> to the accumulated <code>sum</code>:</p><pre><code class="clojure">(defn mean-reducer [acc x]
  (-> acc
      (update-in [:count] inc)
      (update-in [:sum] + x)))
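
;; Reducing a small vector from the identity shows the state we
;; accumulate (a worked example, not from the original text):
(reduce mean-reducer (mean-combiner) [1 2 3])
;; => {:count 3, :sum 6}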
</code></pre><p>The two functions above are almost enough to completely specify our mean fold:</p><pre><code class="clojure">(defn ex-5-13 []
  (->> (load-data "data/soi.csv")
       (r/map :N1)
       (r/fold mean-combiner
               mean-reducer)))

;; {:count 166598, :sum 1.4216975E8}
</code></pre><p>The result gives us the two numbers we need to calculate the mean of N1, obtained in only one iteration over the data. The final step of the calculation can be performed with the following <code>mean-post-combiner</code> function:</p><pre><code class="clojure">(defn mean-post-combiner [{:keys [count sum]}]
  (if (zero? count) 0 (/ sum count)))

(defn ex-5-14 []
  (->> (load-data "data/soi.csv")
       (r/map :N1)
       (r/fold mean-combiner
               mean-reducer)
       (mean-post-combiner)))

;; 853.37
</code></pre>Fortunately, the value agrees with the mean we calculated before.<h3><a name="calculating_the_variance_using_fold"></a>Calculating the variance using fold</h3><p>Next let's examine a more complicated statistic: the variance, or $ S^2 $. The variance measures the "spread" of values about a middle value and is defined as the <em>mean squared difference from the mean</em>:</p><p>$$ S^2 = {1 \over n} \sum\limits_{i=1} ^n (x_i - \bar x) ^2 $$</p><p>where $ \bar x $ refers to the mean value of $ x $.</p><p>To implement this as a fold we might write something like this:</p><pre><code class="clojure">(defn ex-5-15 []
  (let [data (->> (load-data "data/soi.csv")
                  (r/map :N1))
        mean-x (->> data
                    (r/fold mean-combiner
                            mean-reducer)
                    (mean-post-combiner))
        sq-diff (fn [x] (i/pow (- x mean-x) 2))]
    (->> data
         (r/map sq-diff)
         (r/fold mean-combiner
                 mean-reducer)
         (mean-post-combiner))))

;; 3144836.86
</code></pre><p>First, we calculate the mean of the series using the <code>fold</code> we constructed earlier. Then we define a function of <code>x</code>, <code>sq-diff</code>, which calculates the squared difference of <code>x</code> from the mean. We map this over the data and call our mean fold a second time, this time on the squared differences, to arrive at the final variance.</p><p>Thus, we make <em>two</em> complete iterations over the data: first to calculate the mean, and second to calculate each <code>x</code>'s squared difference from it. It might seem impossible to reduce the number of steps further, but in fact it <strong>is</strong> possible to express the variance calculation as a single fold. To do so, we need to keep track of three things: the count, the interim mean, and the sum of squared differences.</p><pre><code class="clojure">(defn variance-combiner
  ([] {:count 0 :mean 0 :sum-of-squares 0})
  ([a b]
   (let [count (+ (:count a) (:count b))]
     {:count count
      :mean (/ (+ (* (:count a) (:mean a))
                  (* (:count b) (:mean b)))
               count)
      :sum-of-squares (+ (:sum-of-squares a)
                         (:sum-of-squares b)
                         (/ (* (- (:mean b) (:mean a))
                               (- (:mean b) (:mean a))
                               (:count a)
                               (:count b))
                            count))})))
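
;; A quick sanity check with example values (not from the text):
;; combining the interim states for [1 1] and [3 3] gives
(variance-combiner {:count 2 :mean 1 :sum-of-squares 0}
                   {:count 2 :mean 3 :sum-of-squares 0})
;; => {:count 4, :mean 2, :sum-of-squares 4}
;; matching the sum of squared deviations of [1 1 3 3] from their mean, 2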
</code></pre><p>The <code>variance-combiner</code> function is shown above. Its identity value is a map with all three values set to zero; the zero-arity implementation just returns this value.</p><p>The two-arity combiner needs to combine the <code>:count</code>, <code>:mean</code> and <code>:sum-of-squares</code> for both of the supplied values. Combining the counts is easy: we simply add them together. Combining the means is only marginally trickier: we need to calculate the weighted mean of the two means. If one mean is based on fewer records, it should count for less in the combined mean:</p><p>$$ \mu_{a,b} = {\mu_a n_a + \mu_b n_b \over n_a + n_b} $$</p><p>Combining the sums of squares is the most complicated calculation of all: as well as adding the sums of squares together, we also need to add a factor to account for the fact that the sums of squares for $a$ and $b$ were likely calculated around differing means.</p><p>The <code>variance-reducer</code> function is much simpler, and reveals how the variance fold is able to calculate the variance in one iteration over the data:</p><pre><code class="clojure">(defn variance-reducer [{:keys [count mean sum-of-squares]} x]
  (let [count' (inc count)
        mean' (+ mean (/ (- x mean) count'))]
    {:count count'
     :mean mean'
     :sum-of-squares (+ sum-of-squares
                        (* (- x mean') (- x mean)))}))
</code></pre><p>For each new record, the interim mean <code>mean'</code> is re-calculated from the previous interim <code>mean</code> and the incremented count <code>count'</code>. We add to the sum of squares the product of <code>x</code>'s differences from the interim mean <em>before</em> and <em>after</em> taking account of this new record.</p><p>The final result is a map containing the count, the mean and the total sum of squares. As the variance is just the sum of squared differences divided by the count, our <code>variance-post-combiner</code> function is a relatively simple one:</p><pre><code class="clojure">(defn variance-post-combiner [{:keys [count mean sum-of-squares]}]
  (if (zero? count) 0 (/ sum-of-squares count)))
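
;; Checking the three functions against a tiny vector (a worked
;; example, not from the original text):
(->> [1 2 3]
     (reduce variance-reducer (variance-combiner))
     (variance-post-combiner))
;; => 2/3, the population variance of [1 2 3]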
</code></pre><p>Putting the three functions together yields the following:</p><pre><code class="clojure">(defn ex-5-16 []
  (->> (load-data "data/soi.csv")
       (r/map :N1)
       (r/fold variance-combiner
               variance-reducer)
       (variance-post-combiner)))

;; 3144836.86
</code></pre><p>Thus, we have been able to calculate the variance in only a <strong>single iteration</strong> over our dataset.</p><h3><a name="mathematical_folds_with_tesser"></a>Mathematical folds with Tesser</h3><p>We should now understand how to use folds to build parallel implementations of simple algorithms. Hopefully, we also have some appreciation for the ingenuity required to find efficient solutions that perform the minimum number of iterations over the data!</p><p>Fortunately, the Tesser library (<a href='https://github.com/aphyr/tesser'>https://github.com/aphyr/tesser</a>) includes implementations of common mathematical folds, including the mean and the variance. To see how to use Tesser, let's consider the covariance of two fields from the IRS dataset: the <em>"salaries and wages"</em> (A00200) and the <em>"unemployment compensation"</em> (A02300) amounts.</p><h3><a name="calculating_covariance_with_tesser"></a>Calculating covariance with Tesser</h3><p>If two variables $ x $ and $ y $ tend to vary together, their deviations from the mean tend to have the same sign; negative if they're less than the mean, positive if they're above. If we multiply the deviations from the mean together, the product is positive when they have the same sign and negative when they have different signs. Taking the mean of these products gives a measure of the <strong>tendency of the two variables to deviate from the mean in the same direction</strong> for each given sample. This is referred to as their <em>covariance</em>.</p><p>The formula is shown below:</p><p>$$ {\rm cov}(X,Y) = {1 \over n} \sum\limits_{i=1} ^n (x_i - \bar x) (y_i - \bar y) $$</p><p>A covariance fold is included in <code>tesser.math</code>. Below, we're requiring <code>tesser.math</code> as <code>m</code> and <code>tesser.core</code> as <code>t</code>.</p><pre><code class="clojure">(require '[tesser.core :as t]
         '[tesser.math :as m])

(defn ex-5-17 []
  (let [data (into [] (load-data "data/soi.csv"))]
    (->> (m/covariance :A02300 :A00200)
         (t/tesser (t/chunk 512 data)))))

;; 3.496E7
</code></pre><p>The <code>m/covariance</code> function expects to receive two arguments: a function to return the $x$ value and another to return the $y$ value from each input record. Since keywords act as functions that extract their corresponding values from a map, we simply pass these in.</p><p>Tesser works in a similar way to Clojure's reducers, but with some minor differences: reducers' <code>fold</code> takes care of splitting our data into subsequences for parallel execution; with Tesser, however, we must divide our data into chunks explicitly. Since this is something we're going to do repeatedly, let's create a little helper function called <code>chunks</code>:</p><pre><code class="clojure">(defn chunks [coll]
  (->> (into [] coll)
       (t/chunk 1024)))
</code></pre><p>For most of the rest of this article, we'll be using the <code>chunks</code> function to split our input data into groups of 1,024 records.</p><h3><a name="commutativity"></a>Commutativity</h3><p>Another difference between Clojure's reducers and Tesser's folds is that Tesser doesn't guarantee input order will be preserved. Along with being associative, functions provided to Tesser's folds must also be commutative. A commutative function is one that returns the same result regardless of the order in which its arguments are provided.</p><p>For example:</p><p>$$ f(a,b) = f(b,a) $$</p><p>Addition and multiplication are commutative, but subtraction and division are not. Commutativity is a useful property of functions intended for distributed data processing because it lowers the amount of coordination required between subtasks. When Tesser executes a combine function, it is free to do so over whichever reduce tasks return their values first: since order doesn't matter, Tesser doesn't need to wait for the earliest chunk to complete.</p><p>Let's rewrite our <code>load-data</code> function into a <code>prepare-data</code> function that will return a commutative Tesser fold. It performs the same steps (parsing a line of the text file, formatting the record as a map and removing zero ZIP codes) that our previous reducers-based function did, but it no longer assumes that the column headers will be the first row in the file: 'first' is a concept which implicitly requires ordered data.</p><pre><code class="clojure">(def column-names
  [:STATEFIPS :STATE :zipcode :AGI_STUB :N1 :MARS1 :MARS2 ...])

(defn prepare-data []
  (->> (t/remove #(.startsWith % "STATEFIPS"))
       (t/map parse-line)
       (t/map (partial format-record column-names))
       (t/remove #(zero? (:zipcode %)))))
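
;; format-record isn't shown in this excerpt; a minimal definition
;; consistent with the earlier record-formatter (an assumption on our
;; part) would be:
(defn format-record [column-names fields]
  (zipmap column-names fields))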
</code></pre><p>Now that all the preparation is being done in Tesser, we can pass the <code>iota/seq</code> directly as input. This might not seem like a significant change, but it will be necessary if we are going to run our analysis on Hadoop, as shown in <a href='https://www.packtpub.com/big-data-and-business-intelligence/clojure-data-science'>Chapter 5, Clojure for Data Science</a>.</p><pre><code class="clojure">(defn ex-5-18 []
  (let [data (iota/seq "data/soi.csv")]
    (->> (prepare-data)
         (m/covariance :A02300 :A00200)
         (t/tesser (chunks data)))))

;; 3.496E7
</code></pre><p>The result is the covariance of the <em>"salaries and wages"</em> (A00200) and <em>"unemployment compensation"</em> (A02300) amounts, but it's a hard number to interpret: its units are the product of the units of the inputs.</p><p>Because of this, covariance is rarely reported as a summary statistic on its own. One way to make the number more comprehensible is to divide the deviations by the product of the standard deviations. This transforms the units to standard scores and constrains the output to a number between -1 and +1 inclusive. The result is called <em>Pearson's correlation</em> or the correlation coefficient, often denoted $r$.</p><p>$$ r = {{\rm cov}(x, y) \over \sigma_x \sigma_y } $$</p><p>As the standard deviation is simply the square root of the variance, we've already covered all the necessary math to calculate the correlation coefficient using Tesser's folds. However, Tesser also includes a built-in function, <code>m/correlation</code>, for calculating the correlation coefficient:</p><pre><code class="clojure">(defn ex-5-19 []
  (let [data (iota/seq "data/soi.csv")]
    (->> (prepare-data)
         (m/correlation :A02300 :A00200)
         (t/tesser (chunks data)))))

;; 0.353
</code></pre><p>There's a modest, positive correlation between these two variables. Whilst it may be useful to know that two variables are correlated, we can't use this information alone to make predictions. In establishing a correlation we have measured the <strong>strength</strong> and <strong>sign</strong> of a relationship, but not its <strong>slope</strong>; to make predictions, we need to know the expected rate of change in one variable given a unit change in the other. What we'd like to determine is an equation that relates the specific value of one variable, called the <em>independent</em> variable, to the expected value of the other, the <em>dependent</em> variable.</p><h3><a name="the_linear_equation"></a>The linear equation</h3><p>Two variables, which we can signify as $x$ and $y$, may be related to each other exactly or inexactly. The simplest such relationship, between an independent variable labelled $x$ and a dependent variable labelled $y$, is a straight line expressed in the formula:</p><p>$$ y = a + bx $$</p><p>The values of the parameters $a$ and $b$ determine respectively the height and steepness of the line: $a$ is referred to as the <em>intercept</em> or <em>constant</em> and $b$ as the <em>gradient</em> or <em>slope</em>. For example, in the mapping between Celsius and Fahrenheit temperature scales $a=32$ and $b=1.8$. Substituting these values of $a$ and $b$ into our equation yields:</p><p>$$ y = 32 + 1.8x $$</p><p>To calculate 10º Celsius in Fahrenheit, we substitute 10 for $x$:</p><p>$$ y = 32 + 1.8(10) = 50 $$</p><p>Thus our equation tells us that 10º Celsius is 50º Fahrenheit, which is indeed the case.</p><h3><a name="simple_linear_regression_with_tesser"></a>Simple linear regression with Tesser</h3><p>Tesser doesn't currently provide a linear regression fold, but it does give us the tools we need to implement one. 
The coefficients of a simple linear regression model, the slope and the intercept, can be calculated as a simple function of the variance, covariance and means of the two inputs:</p><p>$$ b = {{\rm cov}(X,Y) \over {\rm var}(X)} $$</p><p>$$ a = \bar y - b \bar x $$</p><p>The slope $b$ is the covariance of $x$ and $y$ divided by the variance in $x$. The intercept is the value that ensures the regression line passes through the means of both series. Ideally, we'd be able to calculate each of these four values in a single fold over the data. Tesser provides two fold combinators, <code>t/fuse</code> and <code>t/facet</code>, for building more sophisticated folds out of simpler ones.</p><h3><a name="tesser's_fuse_combinator"></a>Tesser's fuse combinator</h3><p>Where we have <strong>multiple folds to run in parallel</strong> over the same data, we should use <code>t/fuse</code>. In the example below, we're fusing the <code>m/mean</code> and <code>m/standard-deviation</code> folds into a single fold that will calculate both values at once:</p><pre><code class="clojure">(defn ex-5-20 []
  (let [data (iota/seq "data/soi.csv")]
    (->> (prepare-data)
         (t/map :A00200)
         (t/fuse {:A00200-mean (m/mean)
                  :A00200-sd (m/standard-deviation)})
         (t/tesser (chunks data)))))

;; {:A00200-sd 89965.99846545042, :A00200-mean 37290.58880658831}
</code></pre><h3><a name="tesser's_facet_combinator"></a>Tesser's facet combinator</h3><p>Conversely, when we have the <strong>same</strong> fold to run on <strong>all the fields</strong> in the map, we should use <code>t/facet</code>:</p><pre><code class="clojure">(defn ex-5-21 []
  (let [data (iota/seq "data/soi.csv")]
    (->> (prepare-data)
         (t/map #(select-keys % [:A00200 :A02300]))
         (t/facet)
         (m/mean)
         (t/tesser (chunks data)))))

;; {:A02300 419.67862159209596, :A00200 37290.58880658831}
</code></pre>In the code above, we use <code>select-keys</code> to fetch just two values from each record (A00200 and A02300) and calculate the mean of both simultaneously.<h3><a name="linear_regression_with_fuse"></a>Linear regression with fuse</h3><p>Let's return to the challenge of performing simple linear regression. We have four numbers to calculate, so let's fuse them together:</p><pre><code class="clojure">(defn calculate-coefficients [{:keys [covariance variance-x
                                      mean-x mean-y]}]
  (let [slope (/ covariance variance-x)
        intercept (- mean-y (* mean-x slope))]
    [intercept slope]))

(defn ex-5-22 []
  (let [data (iota/seq "data/soi.csv")
        fx :A00200
        fy :A02300]
    (->> (prepare-data)
         (t/fuse {:covariance (m/covariance fx fy)
                  :variance-x (m/variance (t/map fx))
                  :mean-x (m/mean (t/map fx))
                  :mean-y (m/mean (t/map fy))})
         (t/post-combine calculate-coefficients)
         (t/tesser (chunks data)))))

;; [258.62 0.0043190406799462925]
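
;; Substituting the coefficients into the linear model gives us a
;; prediction function (`predict` is a name introduced here, not in
;; the original text):
(defn predict [[intercept slope] x]
  (+ intercept (* slope x)))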
</code></pre><p><code>fuse</code> very succinctly binds together the calculations we want to perform. In addition, Tesser allows us to specify a <code>post-combine</code> step to be included as part of the fold rather than handing the result off to another function to finalize the output. The <code>post-combine</code> step receives the four results in a map and calculates the slope and intercept from them, returning the two coefficients as a vector.</p><h3><a name="making_predictions"></a>Making predictions</h3><p>Having calculated the coefficients in the previous section, we simply have to substitute them into the linear model previously described to make predictions about our dependent variable given our independent variable:</p><p>$$ a=258.62 $$</p><p>$$ b=0.0043 $$</p><p>$$ y = 258.62+0.0043x $$</p><p>This equation provides us with a way to determine the value of our dependent variable, <em>"unemployment compensation"</em>, given our independent variable <em>"salaries and wages"</em> amount.</p><h2><a name="summary"></a>Summary</h2><p>In this article we've seen how to use Iota and Clojure's reducers together for folding over large quantities of data. We've also seen how the Tesser library implements a suite of mathematical functions as folds that operate in parallel and require only a single iteration over the dataset.</p><p>We've also learned how to determine whether two variables share a linear correlation, and how to build a simple linear model with one independent variable. We learned the optimal parameters of the model by using Tesser to fuse together several calculations into one.</p><p>I hope you've found this article useful. 
In <a href='https://www.packtpub.com/big-data-and-business-intelligence/clojure-data-science'>Clojure for Data Science</a>, we also cover how to tell whether the parameters we've learned are a good fit for the data, and how to increase the number of parameters to our model to enable more accurate prediction as well as how to use both Tesser and Parkour libraries to scale our analysis to very large, distributed datasets with Hadoop.</p>
Sat, 12 Sep 2015 00:00:00 +0100
http://clojuredatascience.com/posts/2015-09-05-regression-classification-clustering-clojure.html
http://clojuredatascience.com/posts/2015-09-05-regression-classification-clustering-clojure.html
Clojure for Data Science: an Overview
<p>Below are slides from a talk given to the <a href='https://bristolclojurians.github.io/'>Bristol Clojurians</a> on April 21st 2015.</p><p>The talk was in two parts, each around an hour, and in each I attempted to give a practical overview of several key techniques presented in <a href='http://www.amazon.co.uk/gp/product/1784397180/ref=as_li_tl?ie=UTF8&camp=1634&creative=19450&creativeASIN=1784397180&linkCode=as2&tag=henrygarnerco-21'>Clojure for Data Science</a>, including linear & logistic regression and <em>k</em>-means clustering. I also gave a brief demonstration of visualization with <a href='http://quil.info/'>Quil</a> and of writing a Hadoop job with <a href='https://github.com/damballa/parkour'>Parkour</a> to make use of the <a href='http://mahout.apache.org/'>Mahout</a> machine learning library.</p><p><script async class="speakerdeck-embed" data-id="324a4d25b6d541dbadd9e0fb14dc1a51" data-ratio="1.37081659973226" src="//speakerdeck.com/assets/embed.js"></script></p><p>The content of the slides is available at <a href='https://github.com/henrygarner/clojure-data-science'>https://github.com/henrygarner/clojure-data-science</a>.</p>
Sat, 05 Sep 2015 00:00:00 +0100
http://clojuredatascience.com/posts/2015-09-05-published.html
http://clojuredatascience.com/posts/2015-09-05-published.html
Published!
<p>It's been over a year since I began writing <a href='https://www.packtpub.com/big-data-and-business-intelligence/clojure-data-science'>Clojure for Data Science</a> and I'm thrilled to say that the book is finally published. You can pick up a copy from <a href='https://www.packtpub.com/big-data-and-business-intelligence/clojure-data-science'>the publisher's website</a> or from <a href='http://www.amazon.co.uk/gp/product/1784397180/ref=as_li_tl?ie=UTF8&camp=1634&creative=19450&creativeASIN=1784397180&linkCode=as2&tag=henrygarnerco-21'>Amazon</a>.</p><p>Anyone who has written a book will know that it is no small undertaking (I ought to have applied my usual "multiply by 3" rule of code estimation to the project at the outset). Many people have contributed along the way.</p><p>I'm grateful to <a href='https://twitter.com/econohammer'>Dan Hammer</a>, my Packt reviewer, for his valuable perspective as a practicing data scientist, and to those other brave souls who patiently read through the very rough early (and not-so-early) drafts. Foremost among these are <a href='https://twitter.com/EleonoreMayola'>Éléonore Mayola</a>, <a href='https://twitter.com/paulrabutcher'>Paul Butcher</a>, and <a href='https://twitter.com/derbex'>Jeremy Hoyland</a>. Your feedback was not always easy to hear, but it made the book so much better than it would otherwise have been.</p><p>Thank you to the wonderful team at <a href='http://www.mastodonc.com/'>MastodonC</a> who tackled a pre-release version of this book in their company book club, especially <a href='https://twitter.com/EleonoreMayola'>Éléonore Mayola</a>, <a href='https://twitter.com/jasonbelldata'>Jase Bell</a>, and <a href='https://twitter.com/elise_huard'>Elise Huard</a>. 
I'm grateful to <a href='https://twitter.com/fhr'>Francine Bennett</a> for her advice early on—which helped to shape the structure of the book—and also to <a href='https://twitter.com/otfrom'>Bruce Durling</a>, <a href='https://twitter.com/sw1nn'>Neale Swinnerton</a>, and <a href='https://twitter.com/mrchrisadams'>Chris Adams</a> for their company during the otherwise lonely weekends spent writing in the office.</p><p>Thank you to my friends from the machine learning study group: <a href='https://twitter.com/tansakuu'>Sam Joseph</a>, <a href='https://twitter.com/galfridus'>Geoff Hogg</a>, and <a href='https://twitter.com/bewt85'>Ben Taylor</a> for reading the early drafts and providing feedback suitable for Clojure newcomers; and also to <a href='https://twitter.com/snapington'>Luke Snape</a> and <a href='https://twitter.com/tcoupland'>Tom Coupland</a> of the <a href='https://twitter.com/BristolClojure'>Bristol Clojurians</a> for providing the opportunity to test the material out on its intended audience.</p><p>A heartfelt thanks to my dad, Nicholas, for interpreting my vague scribbles into the fantastic figures you see in this book, and to my mum, Jacqueline, and sister, Mary, for being such patient listeners in the times I felt like thinking aloud. Last, but by no means least, thank you to the Nuggets of Wynford Road, Russell and Wendy, for the tea and sympathy whenever it occasionally became a bit too much. I look forward to seeing much more of you both from now on.</p>
Sat, 05 Sep 2015 00:00:00 +0100
http://clojuredatascience.com/posts/2015-04-07-hello-world.html
http://clojuredatascience.com/posts/2015-04-07-hello-world.html
Hello, World!
<p>Welcome to the companion website for the book <strong>Clojure for Data Science</strong>. The book is currently being written by <a href='http://henrygarner.com'>Henry Garner</a> and will be published by <a href='https://www.packtpub.com/'>Packt</a> in the second half of 2015.</p><p>All code examples will be made available on the book's <a href='https://github.com/clojuredatascience/'>GitHub repo</a> for download. As publication approaches this site (and the related <a href='https://github.com/clojuredatascience/clojuredatascience.github.io/wiki'>wiki</a>) will be updated with excerpts and further information related to data science in general and Clojure in particular.</p><p>See you soon.</p>
Tue, 07 Apr 2015 00:00:00 +0100