Data visualization, by definition, involves making a two- or three-dimensional picture of data, so when the data being visualized inherently has many more dimensions than two or three, a big component of data visualization is dimensionality reduction. Dimensionality reduction is also often the first step in a big-data machine-learning pipeline, because most machine-learning algorithms suffer from the Curse of Dimensionality: more input dimensions mean you need exponentially more training data to build a good model. Datacratic’s products operate on billions of data points (big data) in tens of thousands of dimensions (big problem). In this post, we show off a proof of concept for interactively visualizing this kind of data in a browser, in 3D (of course, the images on the screen are two-dimensional, but we use interactivity, motion and perspective to evoke a third dimension).
For the TL;DR crowd, here’s a demo of what we came up with, and the source code is on GitHub.
The behavioural datasets on which Datacratic’s platform operates are basically very large, very sparse binary matrices: you can think of a grid with millions of users running down the side and tens of thousands of behaviours running across the top, with a 1 in each cell where user U engaged in behaviour B, and 0s everywhere else. Each user record can thus be thought of as a point in a high-dimensional space. If we had only three behaviours, this space would be three-dimensional, and there would be only 8 possible points in it, like the corners of a cube. Because we operate on tens of thousands of behaviours, each user sits at one corner of a ten-plus-thousand-dimensional hypercube.
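To make the shape of this data concrete, here’s a minimal sketch of such a matrix using SciPy’s sparse formats. The dimensions match the orders of magnitude above, but the handful of (user, behaviour) pairs are made up purely for illustration; the real matrices have vastly more nonzero cells.

```python
import numpy as np
from scipy.sparse import csr_matrix

n_users, n_behaviours = 1_000_000, 10_000  # orders of magnitude from the post

# A 1 in cell (U, B) means user U engaged in behaviour B; all other cells are 0.
rows = np.array([0, 0, 1, 2])         # user indices (hypothetical)
cols = np.array([42, 777, 42, 9001])  # behaviour indices (hypothetical)
data = np.ones(len(rows), dtype=np.int8)

X = csr_matrix((data, (rows, cols)), shape=(n_users, n_behaviours))
# Each row of X is one user: a corner of a 10,000-dimensional hypercube.
```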
This is hard to describe, hard to think about, very hard to picture, and very hard to run algorithms on efficiently, so one of the first steps in our machine-learning pipeline is to perform a Singular Value Decomposition, or SVD, on the data. The SVD helps us turn our ten-thousand-dimensional hypercube of corners into something a bit more manageable: after this dimensionality-reduction step, each user occupies a point in a two-hundred-dimensional continuous space (i.e. they’re no longer all in corners), and users who behave similarly end up close to each other in this new space. That sounds slightly easier to think about, and it’s certainly easier to run algorithms on, but 200 dimensions are still at least 197 too many to actually make a picture.
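As a rough stand-in for the SVD step in our pipeline (the snippet below uses scikit-learn’s TruncatedSVD, which isn’t what we run in production but works directly on sparse matrices), the reduction looks something like this, with `X` carried over from the sketch above:

```python
from sklearn.decomposition import TruncatedSVD

# Project the sparse binary matrix down to 200 continuous dimensions.
svd = TruncatedSVD(n_components=200)
X_200 = svd.fit_transform(X)  # shape: (n_users, 200), dense floats
```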
To reduce the dimensionality even further, down to something we can actually look at, we use another algorithm called t-distributed Stochastic Neighbour Embedding, or t-SNE, which was designed to do exactly this: take high-dimensional data and make low-dimensional pictures in which points that are close to each other in the high-dimensional space stay close to each other (check out our high-performance open-source implementation!). t-SNE can reduce the number of dimensions to two, which would let us make a scatter-plot of a sample of our users in any old tool, but we chose to reduce to three dimensions instead and used some exotic browser technology to make some fancy visuals. Relaxing the algorithm’s constraint from two output dimensions to three should also preserve more of the high-dimensional structure in the final output, so this wasn’t solely an aesthetic exercise.
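For the actual embedding we use our own implementation linked above; as a stand-in, scikit-learn’s TSNE shows the shape of the step. Because t-SNE is computationally expensive, this sketch embeds only a few thousand sampled users, which is also what the proof of concept below displays:

```python
import numpy as np
from sklearn.manifold import TSNE

# Embed a sample of users rather than all of them: t-SNE is expensive.
rng = np.random.default_rng(0)
sample = rng.choice(len(X_200), size=5000, replace=False)

tsne = TSNE(n_components=3)  # three output dimensions instead of the usual two
points_3d = tsne.fit_transform(X_200[sample])  # shape: (5000, 3)
```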
The proof of concept, which we ended up calling the Data Projector, was built to see if we could interactively visualize a few thousand sampled points from the output of a server-side SVD/t-SNE pipeline in the browser, using WebGL via Three.js instead of something like SVG via D3.js, which doesn’t make use of hardware acceleration in the browser and hence would struggle to display so many points. As the code on GitHub, the demo, and the video above show, the answer is most definitely yes. The interactivity here is that you can drag the cube around to get a different perspective on the data, and you can shift-drag in the right-hand orthographic view to select a prism-shaped volume.
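The hand-off from the server-side pipeline to the browser might look like the following; this is purely a hypothetical serialization step, and the actual Data Projector code on GitHub defines its own input format:

```python
import json

# Serialize the sampled 3-D coordinates for the browser-side WebGL viewer.
# Field names here are assumptions, not the Data Projector's real schema.
records = [{"x": float(x), "y": float(y), "z": float(z)} for x, y, z in points_3d]
with open("points.json", "w") as f:
    json.dump(records, f)
```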
In the video above and in the demo, each point represents a user. Points close to each other represent users who are similar, in the sense that they behaved similarly. The colour of each point represents the output of yet another machine-learning algorithm, k-means clustering, which is used to group similar data points into clusters. Here we ran k-means with k=10 in the high-dimensional space before running t-SNE, grouping the users into 10 buckets based on similarity. You’ll notice that, broadly, similarly-coloured users end up close together, creating coloured clouds. This means that users who were close to each other in the high-dimensional space ended up close to each other in the three-dimensional visualization as well.
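Sketching that clustering step with scikit-learn as a stand-in (note that k-means runs in the 200-dimensional SVD space, not on the 3-D t-SNE output):

```python
from sklearn.cluster import KMeans

# Group users into 10 buckets by similarity in the 200-dimensional space.
kmeans = KMeans(n_clusters=10)
labels = kmeans.fit_predict(X_200)  # one cluster id (0-9) per user

# The sampled users' labels choose each point's colour in the visualization,
# e.g. by adding them to the JSON export sketched earlier.
sample_labels = labels[sample]
```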
This proof of concept shows that it’s possible to interactively visualize the output of some heavy-duty server-side high-dimensional machine-learning algos in 3D in the browser. In this demo we’re manipulating a plot with thousands of points in real time, leveraging hardware acceleration via WebGL. The performance is good enough that we can envision much more complicated operations that would let us interactively dig into the actual semantics of why one user is “similar” to another, all in the browser. The same technique can also be applied to simpler two-dimensional scatter-plots while maintaining very good performance, far beyond what SVG- or canvas-based interactive visualization libraries can manage today.
⁂