BLOG

December 3, 2018

Announcing the Xanadu Quantum Software Competition

By Xanadu

Encouraging the use of quantum software across three areas: education, software development, and research — with multiple prizes of up to $1000 on offer.

Interested in quantum computing and quantum software, and want to use the latest quantum hardware? Have a cool idea for an educational YouTube video, Jupyter Notebook, research paper, or just itching to get your hands dirty with Strawberry Fields or PennyLane?

We’re excited to announce the Xanadu Quantum Software Competition, open to everyone — with prizes of $1000 for first place finishers.

The Xanadu Quantum Software Competition consists of three awards:

  1. Education award (for those who enjoy teaching and scientific communication),
  2. Software award (for those who enjoy tinkering with the cutting-edge of physics and technology), and
  3. Research award (for those who love pushing the boundaries of knowledge).

All categories include a first place prize of CAD$1000, and a second place prize of CAD$500, with finalists profiled on Xanadu’s social media and blog.

The entry deadline for all three awards is August 30, 2019 — plenty of time to get those creative juices flowing.

To submit your entries, go to https://pennylane.ai/competition.


About our software

PennyLane is the first dedicated library for quantum machine learning, leveraging actual quantum hardware to calculate gradients and perform machine learning and optimization. It can be used to apply well-known algorithms such as the quantum approximate optimization algorithm (QAOA), variational quantum eigensolvers (VQE), quantum classifiers, quantum generative adversarial networks (QGANs), quantum neural networks (QNNs), and many other hybrid quantum-classical models.

Not only that, but PennyLane allows you to run these machine learning algorithms directly on existing quantum hardware platforms — with plugins available for ProjectQ, Qiskit (supporting the IBM Q hardware backend), and Strawberry Fields, and a Rigetti Forest plugin coming soon.

Strawberry Fields, meanwhile, is a full-stack quantum software platform for photonic quantum computing, and ideal for quantum neural networks. Its built-in suite of quantum simulators — implemented using NumPy and TensorFlow — allows the full range of classical optimization and machine learning techniques to be applied to photonic quantum computation.


What are we looking for?

Almost anything! Your creativity is the limit.

If you love explaining and communicating topics in quantum computation, your submission could be in any form you feel best demonstrates and highlights an interesting phenomenon in quantum computation using Strawberry Fields or PennyLane. This includes posters, Jupyter notebooks, a children’s book, a video, or even an animation.

Have a passion for machine learning, and curious about quantum machine learning? Send us a cool application that takes advantage of PennyLane’s automatic differentiation on near-term quantum hardware.

Or, if you’re involved in quantum information and quantum machine learning research, submit a link to your publicly listed preprint or published paper that uses Strawberry Fields or PennyLane, and you will be in the running for a cool $1000.

For some ideas to stimulate the brain, check out the PennyLane and Strawberry Fields documentation, read our whitepapers, play around with Strawberry Fields Interactive, and see what other cool things people have done in the Strawberry Fields Gallery and the PennyLane examples.

And don’t forget to submit!


We’re entering an exciting time in quantum physics and quantum computation: near-term quantum devices are rapidly becoming a reality, accessible to everyone over the Internet. This, in turn, is driving the development of quantum software — we need a way to program and simulate these quantum devices.

We can’t wait to see your submissions. For the terms and conditions and general competition rules, please see here.

November 20, 2018

How to train your QGAN: Quantum machine learning with PennyLane

By Nathan Killoran, Josh Izaac, and Christian Gogolin

Machine learning is frequently presented as one of the ‘killer apps’ of quantum computers. Unlike conventional digital computers, which employ classical bits, quantum computers manipulate physical systems at their most fundamental level, opening up a much richer structure for computation. Quantum computers natively process quantum information, which corresponds to vectors in very high-dimensional vector spaces.

Intuitively, this makes quantum computers very well suited to machine learning tasks. Indeed, sufficiently large fault-tolerant quantum computers will eventually give computational speedups for the basic linear algebra calculations prevalent in many machine learning algorithms.

The first generation of quantum hardware is now here. It is also accessible — often for free — over the cloud. This emerging quantum hardware has inspired quantum computing experts to rethink their previous mindsets. Instead of designing algorithms for perfect large-scale devices, we have begun to explore the awesome things that can already be done with the devices we have.

This has led to a flurry of new algorithms: quantum approximate optimization algorithms (QAOA), variational quantum eigensolvers (VQE), quantum classifiers, quantum generative adversarial networks (QGANs), quantum neural networks (QNNs), and — more generally — the notion of hybrid quantum-classical models. These cutting-edge ideas are exciting, yet their potential is still largely unexplored. Quantum machine learning and quantum optimization have become very hot research areas lately, with a whirlwind of new work developed in the past few years.

Number of research papers released each year containing the term ‘quantum machine learning’ (source: Google Scholar)

At Xanadu, we have one of the best quantum machine learning research teams in the world, regularly contributing to the cutting edge of the field. Yet we recognized early the need to open up this hot area to a broader audience, taking it beyond the current circle of insiders and experts. We do this to accelerate the exploration of new algorithms, to expand the search for new near-term quantum machine learning algorithms, and to establish best-practices for building quantum and hybrid models.

Imagine the ideas that will emerge when anyone can train quantum computers as easily as they would train a neural network. With this in mind, we created a dedicated software library for quantum machine learning — PennyLane.


Enter PennyLane

In designing PennyLane, we took the ideas we like best from classical machine learning and ported them to work natively on quantum computers.

For instance, there has been huge growth over the past half decade in the field of deep learning, the subfield of machine learning which deals with neural networks with many layers. One of the key drivers of this expansion is dedicated software libraries like Theano, TensorFlow, PyTorch, and more. The ability to rapidly implement and train models with these high-level libraries has given the field a huge boost, as heuristic methods that work well in practice are often discovered through trial and error.

These libraries have two main features in common:

(i) the ability to compute on special-purpose hardware (GPUs, TPUs); and

(ii) automatic differentiation, commonly implemented using the famous backpropagation algorithm.

For PennyLane, the special-purpose hardware is — obviously — quantum computing devices. However, there was no automatic differentiation software for quantum computations before PennyLane.


Backpropagating through PennyLane

Without getting too technical, automatic differentiation is the ability of software to automatically compute derivatives (or gradients) of computer code with respect to free parameters.

When you hear about “learning” or “training” in deep learning, the key ingredient is automatic differentiation. The derivative of a function tells us how that function changes if we adjust its parameter a tiny bit. With access to this derivative information, we can progressively modify and optimize a machine learning model to suit our needs. Commonly, automatic differentiation is implemented using the backpropagation algorithm, which builds up the gradient using the chain rule from calculus, looking piece-by-piece at all subroutines of the overall computation.

There are two paths for applying automatic differentiation techniques to quantum computations. The first is to simulate a quantum computation using a classical machine learning library. Earlier this year, we released Strawberry Fields, which does exactly that via its TensorFlow simulator backend. This strategy, however, can never give a quantum advantage, since it is inherently limited by the inefficiency of simulating quantum physics with classical computers.

The second strategy is to build a version of automatic differentiation that is naturally compatible with quantum hardware, and which will continue to work and become increasingly useful as quantum computers get progressively more advanced. But how can we compute gradients of quantum computing circuits when even simulating their output is classically intractable? The key insight is to use the same quantum hardware for both evaluating the quantum circuit and for computing its gradients.

Every gate in a quantum circuit carries out some transformation. It turns out that in most cases of interest, we can reuse the same gate, with only small modifications, to evaluate the derivative with respect to the parameter of this transformation. This per-gate derivative information can then be fed into the backpropagation algorithm. The backpropagation algorithm is still implemented on a classical computer, and it cannot see the inner workings of a quantum circuit (i.e., the intermediate quantum states of the circuit).

However, since the quantum devices can evaluate their own gradients efficiently, backpropagation never needs to penetrate the quantum circuit. PennyLane makes use of Python’s “autograd” library to perform automatic differentiation, providing the key additions that allow quantum computations to be differentiated. This means that PennyLane has full automatic differentiation support for classical, quantum, and hybrid computations.
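
To make this concrete, here is a minimal sketch of what differentiating a quantum circuit looks like in PennyLane, written against current PennyLane conventions (the exact syntax has evolved since this post; the circuit is an arbitrary one-parameter example):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

# qml.grad returns a new function that evaluates d<Z>/d(theta),
# computed using the same device that runs the circuit itself
dcircuit = qml.grad(circuit, argnum=0)

theta = np.array(0.5, requires_grad=True)
print(circuit(theta), dcircuit(theta))
```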

Example sketch of the kind of hybrid classical-quantum model that is possible with PennyLane

Taking a stroll down PennyLane

With PennyLane’s easy-to-use interface (inspired by NumPy and TensorFlow), you can code up quantum computational circuits and bind them together with classical processing steps to build a full-blown hybrid computation.

PennyLane also includes a suite of gradient-descent based optimizers, which means that variational quantum circuits can be optimized in the same way as deep neural networks.
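
Continuing the sketch above (a hedged illustration; the stepsize and iteration count are arbitrary), training looks just like classical gradient descent:

```python
# treat the circuit's expectation value as the cost function to minimize
opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)

for _ in range(50):
    theta = opt.step(circuit, theta)  # one gradient-descent update of the parameter

print("optimized theta:", theta, "cost:", circuit(theta))
```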

What if you want to move code you tested on a simulator to real quantum hardware, or even compare different hardware offerings? That’s also easy; PennyLane is fully hardware-agnostic. With a one-line change, you can ‘hot-swap’ the device running the quantum portion of your computational model, or even construct a model that utilizes multiple different quantum devices together.
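
For instance (a sketch assuming the PennyLane–Strawberry Fields plugin; the device name and cutoff_dim argument follow that plugin’s conventions):

```python
# run on the built-in simulator...
dev = qml.device("default.qubit", wires=2)

# ...or hot-swap the backend with a one-line change, e.g. the Strawberry
# Fields Fock simulator provided by the pennylane-sf plugin:
# dev = qml.device("strawberryfields.fock", wires=2, cutoff_dim=10)
```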

Our aim is to support the entire growing ecosystem of near-term quantum hardware. To that end, PennyLane provides a plugin API through which new quantum devices can easily be made QML-ready. Already, PennyLane has plugins for our home-grown Strawberry Fields quantum photonics API, the open-source ProjectQ framework for quantum computing, and hardware support for IBM Q. Coming soon: support for even more quantum hardware, including PennyLane plugins for Rigetti’s PyQuil and IBM’s Qiskit.

For more details on PennyLane, check out our documentation, and explore the various tutorials available. You can also check out the source code at our GitHub repository, and join the discussion at the PennyLane forums. We welcome code contributions — all users who contribute significantly to the code will be offered the opportunity to be listed as an author on the PennyLane whitepaper.


We are incredibly excited to share PennyLane with you. Join us in developing the next cutting-edge quantum machine learning algorithms!

October 30, 2018

Making light of quantum machine learning

By Juan Miguel Arrazola, Thomas R. Bromley, Josh Izaac, and Nathan Killoran

Emojis, light music, and other applications of quantum neural networks 🔥 🔥 🔥

At Xanadu, we’re working hard to make quantum computing and quantum machine learning a reality. We’ve recently developed a new light-based quantum neural network scheme which can be used to tackle challenging scientific and technical problems. But after a hard day’s work, sometimes you just want to let loose and get creative.

This blog post will be all about having fun with our new toys. Let’s start with a little “light music”: grab your headphones!

Making music with quantum light

Quantum neural networks can be used to transform simple quantum states into more complex ones. In our recent paper, we trained quantum circuits that could convert laser light into states of a fixed number of photons. By appropriately selecting the brightness and phase of the incoming light, we can generate states of one, two, or three photons. Instead, by inappropriately selecting the brightness and phase of the incoming light, we can generate completely new kinds of quantum states that can be interpreted as musical instruments.

Quantum instruments.

How does a quantum state make a musical instrument? The frequency of a wave determines its pitch, i.e., whether it sounds like the note C or the note A. The shape of a sound wave dictates its timbre: the difference between a piano or an electric guitar playing the same note.

Each quantum state of light is characterized by a unique wavefunction whose shape determines the timbre of its associated instrument. We can ‘play’ a quantum state by generating a sound wave built by repeating the shape of the wavefunction at a desired frequency.

Our quantum neural network can therefore be used to discover new instruments: by interpolating between the wavefunctions of known quantum states, we can discover new wavefunctions, and thus, new instruments.

We experimented with this idea by choosing different kinds of input laser light and playing the resulting quantum instruments. After a careful selection process, Juan Miguel presents to you his creations: the Heisenbass, Diraclarinet, and Hilbertsichord.

Heisenbass (left), Diraclarinet (middle), and Hilbertsichord (right)

We started a band, Schrödinger’s Lonely Hearts Club Band. For auditions, the task was simple: play the tune of “Strawberry Fields Forever”.

Here’s Nathan auditioning on the Heisenbass
Josh playing the Diraclarinet
and Tom on the Hilbertsichord

Listen to the three of them auditioning together:

The band was formed and we jammed some songs, recording a couple of audio snippets from these sessions. Hear the band musing about their time developing Xanadu’s quantum programming language:

Blackbird

and playing Naaaaaaaaa naaaaaaa naaaaaaa naanaanaaanaaaaaaaaaaaaaa


Tetrominos and Emojis

A quantum neural network can also be trained to generate images by transforming light from two input lasers. At the output, we count how many photons appear in each of the two channels (known as modes) and record these results in a grid. For example, the top-most square in the grid below appears when we detect one photon in the first mode and three photons in the second mode.
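
In code, this bookkeeping is just a two-dimensional histogram (sample_photon_numbers below is a hypothetical stand-in for one run of the two-mode circuit):

```python
import numpy as np

cutoff = 10  # maximum photon number recorded per mode

def sample_photon_numbers():
    # hypothetical stand-in for running the circuit and counting the
    # photons in each mode; here we just draw random photon numbers
    return np.random.randint(0, cutoff, size=2)

grid = np.zeros((cutoff, cutoff))
for _ in range(10000):
    n1, n2 = sample_photon_numbers()
    grid[n1, n2] += 1          # one detection event per run of the experiment

image = grid / grid.max()      # normalized pixel intensities
```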

Every time the experiment is run, we can add an event to one cell of the grid. Eventually, a pattern builds up, and we are able to turn the grid into an image. What images shall we produce?

To start off, we trained our quantum circuit to output Tetris blocks. The goal was to output a different Tetris block for each choice of input to the circuit. You can imagine this as an unnecessarily complicated mechanism for generating the blocks in a game of Tetris — perhaps the first quantum Rube Goldberg machine.

Take a look at the results:

Did you know: the blocks in Tetris are called Tetrominos?

Let’s take a closer look at how the images are generated.

On the top, we see a simple target image. On the bottom, we show how the same image is built up from the output of the quantum network.

For each frame in the animation, an output is sampled from the quantum device, and the relative brightness of the corresponding pixel is increased slightly. Over time, these detections build up a pattern which is nearly identical to the desired image.

In fact, images are represented by encoding the intensity of the pixels in the amplitudes of the quantum state. This uses a quantum property called superposition. You might have heard of this phenomenon before — it is the central premise of the Schrödinger’s cat thought experiment.

The image below shows the result of training our quantum neural network to produce the cat emoji 🐱. What better way to give a nod to Schrödinger!


All of these examples were created using our quantum software library Strawberry Fields. Download it and see what fun things you can create!

September 19, 2018

Photonic quantum neural networks

By Nathan Killoran, Josh Izaac, Juan Miguel Arrazola, and Thomas R. Bromley

At Xanadu we are developing a photonic quantum computer: a device that processes information stored in quantum states of light. We are very excited by the possibilities that this approach brings. Photonic quantum computers naturally use continuous degrees of freedom — like the amplitude and phase of light — to encode information. This continuous, or analog, structure makes photonic devices a natural platform for quantum versions of neural networks.

How do we mimic a neural network using a photonic system? And where does quantum enter the game?

This summer, we released an exciting new paper which resolves these questions. We propose a photonic circuit which consists of a sequence of repeating building blocks, or layers. Layers can be composed, with the output of one layer serving as the input to the next. These photonic layers are akin to the layers which appear in classical neural networks.

Classical nets take an input x, multiply it by a weight matrix W, add a bias b, and pass the result through a nonlinear function (such as tanh or ReLU):

 

 

The basic layer unit of a classical neural network, which performs the transformation x→Φ(Wx+b). Using the singular-value decomposition, W can equivalently be given by two orthogonal matrices O1, O2, and a diagonal matrix Σ.

 

Our quantum layer mimics this functionality using photonic quantum gates: interferometers (made from phase shifters and beamsplitters), squeezing and displacement gates, and a fixed nonlinear transformation. These are the same gates that are used to build a photonic quantum computer, so our quantum neural network architecture has all the power of quantum computers.

 

The basic layer unit of a photonic quantum neural network. Gates are coloured to indicate which classical component they are related to.
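
A minimal sketch of one such layer in Strawberry Fields (current API; the gate parameters below are arbitrary placeholders rather than trained values):

```python
import strawberryfields as sf
from strawberryfields.ops import BSgate, Sgate, Dgate, Kgate

prog = sf.Program(2)
with prog.context as q:
    BSgate(0.5, 0.1) | (q[0], q[1])  # interferometer: orthogonal matrix O1
    Sgate(0.2) | q[0]                # squeezing: diagonal matrix Sigma
    Sgate(0.3) | q[1]
    BSgate(0.4, 0.2) | (q[0], q[1])  # interferometer: orthogonal matrix O2
    Dgate(0.1) | q[0]                # displacement: bias b
    Dgate(0.1) | q[1]
    Kgate(0.05) | q[0]               # Kerr gate: the fixed nonlinearity
    Kgate(0.05) | q[1]

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
state = eng.run(prog).state
```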

 

The quantum neural network retains strong ties to classical neural networks. In fact, the quantum version can be used to run the classical version, by using the quantum net in a way which does not generate any quantum weirdness (superposition, entanglement, etc.). The similarity is illustrated through the colouring of gates in the two images above. The interferometers and squeezing gates are connected to the weight matrix, the displacement gates with the bias, and the quantum nonlinearity with the classical nonlinearity.

We trained the quantum neural network to do several tasks: curve fitting, fraud detection, a classical-quantum autoencoder, and generating images. We will highlight some of these cool applications in a future blog post. In the meantime, we’d like to share one particular and powerful use of photonic quantum neural networks.

 

Learning quantum states and quantum gates

While it is easy to simulate arbitrary quantum states of light using the Strawberry Fields software package, sometimes we forget how much of a challenge generating them can be for quantum computing researchers — who might spend days rearranging an equation, only to end up back where they started. As reported in another recent research paper, the quantum neural network architecture we have pioneered can help in these cases. Using training methods from machine learning, we can optimize a quantum neural network circuit to produce arbitrary quantum states. Once we have learned the correct parameters, this state-preparation subroutine can then be reused within other quantum circuits or algorithms.

A quantum computing expert might take weeks or months to craft such a circuit, while the machine learning approach can find solutions on a timescale of hours. This can dramatically accelerate the research and development process, potentially leading to breakthroughs in quantum computing.

For example, one tricky problem in quantum experiments is the creation of single-photon states — currently done via a random process known as spontaneous parametric down-conversion. We trained a quantum neural network which can produce on-demand single photon states using a fixed set of quantum gates. Quantum states of light can be represented by smooth landscapes called Wigner functions, so we can visualize the state output by our quantum neural network during training with a 3D animation:

 

The output of the quantum neural network at different stages of training. In these plots, we always begin from a fixed starting state (shown by the initial peak, called a Gaussian). We then gradually learn to output a single photon state (shown here by the final shape with a red trough at the center).

 

We can even do this with more complicated states, such as the so-called Schrödinger cat states, or the ON state — an important state in photonic quantum computing, used to construct various quantum gates.

 

Learning a Schrödinger cat state

 

Learning an ON state

Going one step further, our paper shows how to implement quantum gates using the quantum neural network architecture. Gates are another important ingredient in quantum computation, giving us a tool to control how quantum systems evolve. All quantum algorithms, including the famous Shor’s algorithm, require gates to function, yet it can be hard to work out how to implement them physically. Our approach automates this procedure.

 

To visualize this process, we can use the fact that gates in quantum computing are unitary matrices, with complex-valued entries. We can thus depict a gate graphically by colouring the real and imaginary entries of the corresponding matrix according to their magnitude.

 

A random four-dimensional gate U, depicted using the real and imaginary parts of the corresponding unitary matrix.
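
One quick way to produce such a picture (a sketch using SciPy’s random-unitary sampler and matplotlib; the colour map is an arbitrary choice):

```python
import matplotlib.pyplot as plt
from scipy.stats import unitary_group

U = unitary_group.rvs(4)  # a random four-dimensional unitary

fig, (ax_re, ax_im) = plt.subplots(1, 2)
ax_re.matshow(U.real, cmap="RdBu")  # colour the real parts...
ax_re.set_title("Re(U)")
ax_im.matshow(U.imag, cmap="RdBu")  # ...and the imaginary parts
ax_im.set_title("Im(U)")
plt.show()
```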

The transformation performed by our quantum neural network architecture can also be represented with this simple colouring scheme. The image below demonstrates how the quantum neural network transformation — initially random — is progressively trained to match a specific target gate.

 

Each frame in the above animation corresponds to the transformation carried out by a quantum neural network at a particular step in training. At the end of training, the transformation closely matches the desired one given above by U.

 

All the code for our state and gate learning is available on GitHub, and uses Python, the machine learning framework TensorFlow, and our quantum simulation software Strawberry Fields. Download it and see what states you can generate!

July 3, 2018

New machine learning and quantum chemistry apps for Strawberry Fields

By Josh Izaac

Introducing OpenFermion support and the Quantum Machine Learning Toolbox (QMLT)

Since our last update, we have been hard at work improving Strawberry Fields, our photonics-based quantum software platform; this includes new features for decomposing optical circuits into the continuous-variable gate set. It has also been great to see the burgeoning community growing around Strawberry Fields, creating content ranging from tutorials that help us understand Bell correlations in continuous-variable (CV) systems to quantum battleship games. We are also incorporating feedback we have received from users — come say hi on our Slack channel if you haven’t already!

Behind the scenes, Strawberry Fields is an integral part of our research workflow. Our latest paper, Continuous-variable quantum neural networks, uses Strawberry Fields to demonstrate a new architecture for quantum neural networks — including a neat example where a quantum neural network is trained to generate Tetris blocks, or “Tetrominos”.

The potential applications of quantum computing are huge, and our goal with Strawberry Fields is to make them as accessible as possible — whether you are a quantum physicist, chemist, machine learning scientist, or just having a bit of fun. To that end, we are delighted to introduce two new applications that build on the Strawberry Fields platform: SFOpenBoson and the Quantum Machine Learning Toolbox (QMLT).


OpenFermion and SFOpenBoson

The quantum simulation of photons and other bosons is a natural fit for Strawberry Fields and the photonic hardware we are developing at Xanadu. We are thrilled to announce that this is now even more accessible — we have joined forces with the Google Quantum A.I. research team to introduce bosonic systems to OpenFermion, the collaborative open-source chemistry package for quantum computers.

Not only that, but bosonic systems constructed in OpenFermion can be simulated in Strawberry Fields via our new SFOpenBoson plugin — no prior knowledge of quantum circuits or decompositions required! We handle that for you behind the scenes, and allow you to view which quantum gates were applied.

For example, quantum simulation of the Bose-Hubbard model can be done in as few as six lines of code:

Bose-Hubbard simulation performed in Strawberry Fields, using a Hamiltonian defined in OpenFermion.
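
For reference, a sketch along the lines of the SFOpenBoson tutorial, written against the 2018-era Strawberry Fields API (the lattice dimensions, couplings, evolution time, and Trotter steps are illustrative):

```python
from openfermion.hamiltonians import bose_hubbard
import strawberryfields as sf
from strawberryfields.ops import Fock
from sfopenboson.ops import BoseHubbardPropagation

H = bose_hubbard(1, 2, 1, 1.5)                # 1x2 lattice, tunneling J=1, interaction U=1.5
eng, q = sf.Engine(2)                         # one mode per lattice site
with eng:
    Fock(2) | q[0]                            # start with two bosons on the first site
    BoseHubbardPropagation(H, 1.086, 20) | q  # Trotterized time evolution under H
state = eng.run('fock', cutoff_dim=3)
```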

OpenFermion is the definitive quantum chemistry library for quantum computation, and we are excited to be part of a collaboration that includes companies at the forefront of quantum computing, such as Google, D-Wave, and Rigetti.

“Many important physical phenomena in electronic structure arise due to interactions between bosons (e.g., photons, phonons) and fermions (e.g., electrons),” says Ryan Babbush, the lead researcher of OpenFermion at Google Quantum A.I.

“The introduction of tools for representing bosonic systems adds important new functionality to OpenFermion, and meaningfully extends the scope of the library.”

Have a read of the Strawberry Fields section in the OpenFermion paper, and check out our SFOpenBoson documentation and tutorials to see how you can use OpenFermion in conjunction with Strawberry Fields.


Quantum Machine Learning Toolbox

Quantum machine learning is a rapidly advancing area, with applications stretching across multiple disciplines. We believe everyone — no matter your machine learning prowess — can take advantage of this functionality in Strawberry Fields. To help lessen the learning curve, we are delighted to introduce the Quantum Machine Learning Toolbox (QMLT) — a Strawberry Fields application that enhances the core machine learning functionality with useful tools, functions, and capabilities.

The toolbox supports a number of things that make your life easier:

  • Easily set up optimization, supervised, and unsupervised learning tasks
  • Run and score trained circuits, predict new inputs, and compute the accuracy on a training set
  • Use different optimizers, including numerical and automatic methods
  • Visualize and log the cost function and parameters during training (see image)
  • Include regularization
  • Do a warm start with pretrained models.

The QMLT integrates with Strawberry Fields and quantum circuits, making complicated machine learning exercises simple to define and run. The built-in numerical learner even opens up all three SF simulator backends for machine learning, with built-in live plots so you can track your optimization progress in real time.

Live plotting interface of the QMLT numerical learner

The Quantum Machine Learning Toolbox is available right now at our GitHub repository, with online documentation available here. Check out the docs for examples covering optimization, supervised, and unsupervised learning. You can get started by reading our introduction to quantum variational circuits, then have a go working through some of the curated machine learning and optimization tutorials.


We hope you enjoy using these new tools and applications; if you do any cool projects or research, reach out to us and we’ll post them in the Strawberry Fields gallery. We have more exciting things in the works for Strawberry Fields — stay tuned!

June 13, 2018

Quantum Machine Learning 1.0: A big future for small devices

By Maria Schuld

Quantum machine learning is a new buzzword in quantum computing. This emerging field asks — amongst other things — how we can use quantum computers for intelligent data analysis. At Xanadu we are very excited about quantum machine learning and spend a fair amount of time thinking about it. Here is why.

First of all, it is important to note that quantum machine learning is very young, meaning that it is not yet clear what results, and commercial applications, to expect from it. This was demonstrated at the “Quantum meets Industry” panel at the quantum machine learning conference in Bilbao, Spain. When asked whether the time is ripe for commercial investments into quantum machine learning, the experts from companies such as IBM, Microsoft, and NASA were noticeably careful with their answers. Still, almost every company involved in quantum computing today, including the representatives on the panel, has a machine learning group.

If even the ‘big players’ are struggling to make definite statements about the — let’s say 5-year — outlook on using quantum computers for machine learning tasks, should quantum computing startups like Xanadu get on board? We think the answer is yes and want to put three arguments forward:

  1. Early-generation quantum devices are promising newcomers to the growing collection of AI accelerators, thereby enabling machine learning.
  2. Quantum machine learning can lead to the discovery of new models and thereby innovate machine learning.
  3. Machine learning, and quantum machine learning in particular, will increasingly permeate all aspects of quantum computing, redefining the way we think about quantum computing.

Let us go through these points one by one.

1. Enable machine learning

Early-generation quantum devices vary in their programming models, their generality, the quantum advantage they promise, and the hardware platforms that they run on. Across the board, they are very different from the universal processors that researchers envisioned when the field started in the 1990s. For machine learning, this may be a feature rather than a bug.

Quantum devices as special-purpose AI accelerators

An example of a “quantum ASIC” is an integrated nanophotonics chip that implements Boson sampling for a fixed number of modes (here corresponding to yellow light beams). Boson sampling quickly becomes intractable on classical computers as we increase the number of modes.

Many current quantum technologies resemble special-purpose hardware like Application-Specific Integrated Circuits (ASICs), rather than a general-purpose CPU. They are hardwired to implement a limited class of quantum algorithms. More advanced quantum devices can be programmed to run simple quantum circuits, which makes them more similar to Field-Programmable Gate Arrays (FPGAs), integrated circuits that are programmed using a low-level, hardware-specific description language. In both cases, an intimate knowledge of the hardware design and limitations is needed to run effective algorithms.

Schematic drawing of an FPGA (left) and IBM’s 16-qubit quantum chip (right, adapted from here). Near-term quantum devices can be compared to FPGAs, in that they also consist of locally programmable gates with a hardware-oriented programming language.

ASICs and FPGAs find growing use in machine learning and artificial intelligence, where their slim architectures reduce the overhead of a central processor and naturally suit the tasks they specialize in. If current quantum technologies resemble this classical special-purpose hardware, they could find applications in machine learning in a similar fashion — even without universal quantum computing and exponential quantum speedups.

“Quantum technologies may eventually have a place in the mix of AI hardware as we develop newer and newer techniques to advance towards artificial general intelligence.”

Taking a look at the most advanced AI solutions reveals that they already use a blend of technologies. More and more computation is done on special-purpose devices located at the edge, where technology interacts with its environment (think of fingerprint recognition for unlocking a phone or smile detection in a camera). At the other end of the spectrum, calculations are done on GPU clusters (for instance, traffic routing or tagging photos). As a matter of fact, a modern GPU is already a technology blend in itself: the latest Volta chips by Nvidia include low-precision ASICs called Tensor Cores, designed specifically to accelerate the training of neural networks. Google follows a similar path with their Tensor Processing Units (TPUs) that are designed to support the TensorFlow machine learning framework. In short, AI has already embraced heterogeneity. Quantum technologies may eventually have a place in the mix of AI hardware. And this mix has to be as strong as possible if we want to advance towards artificial general intelligence.

Finally, hardware can significantly shape the advancement of software. In the 2010s, the use of GPUs contributed to the renaissance of neural network models (which had been around for decades but were largely discarded as untrainable). Similarly, accelerating quantum technologies could make their very own contribution to lifting specific machine learning methods into the realm of the doable, or even of the cutting-edge. This is particularly true for methods that are considered too hard to train with classical hardware and which were superseded by more convenient competitors.

Future artificial intelligence and machine learning applications will need multiple hardware platforms, where every component is responsible for certain subtasks. Early-generation quantum devices could find their place on the low-generality (but high-speed) end of AI accelerators. One day, an error-corrected general purpose quantum processing unit (QPU) could extend the spectrum towards the right.

What quantum computers are good at

If early-generation quantum devices can be thought of as special-purpose AI accelerators, what exactly can quantum computers contribute to machine learning and AI? Why would we want to use “quantum ASICs”? Let’s look at a selection of exciting candidate tasks, namely optimization, linear algebra, sampling, and kernel evaluations.

Optimization. Just like in machine learning, optimization is a prominent task in quantum physics. Physicists (and quantum chemists) are typically interested in finding the point of lowest energy in a high-dimensional energy landscape. This is the basic paradigm of adiabatic quantum computing and quantum annealing. Unsurprisingly, one of the first tasks for quantum computers investigated in the context of machine learning was optimization. The D-Wave quantum annealer, a special-purpose device that can solve so-called quadratic unconstrained binary optimization problems, was used as early as 2008 to solve classification tasks. More recently, the hybrid quantum-classical technique of variational circuits has been proposed. There, a quantum device is used to evaluate a hard-to-compute cost function, while a classical device performs an optimization based on this information.

Linear Algebra. When speaking about potential exponential quantum speedups for machine learning, people usually refer to the inherent ability of quantum computers to execute linear algebra computations. There are many subtleties to this claim, and its prospects with regard to hardware in the near term are not always clear. One of the bottlenecks is data encoding: to use a quantum computer as a kind of super-fast linear algebra enabler for large matrix multiplications and eigendecompositions (not unlike TPUs), we first have to “load” the large matrix onto the quantum device, a procedure that is highly non-trivial.

If we interpret the matrix describing a quantum gate as a linear layer of a neural network, we can visualize how the gate would connect inputs and outputs (which are ultimately the amplitudes of the quantum state). A single qubit gate can transform an exponentially large vector, yet in a highly symmetric fashion. Left: Arbitrary single qubit gate applied to the third out of five qubits. Right: Arbitrary two qubit gate applied to the third and fourth out of five qubits. Each shade of a color stands for a different “weight”.

However, there may be near-term benefits in understanding quantum computers as fast linear algebra processing units. Mathematically speaking, a quantum gate executes a multiplication of an exponentially — or even infinitely — large matrix with a similarly large vector. Specific costly linear algebra computations — namely those corresponding to quantum gates — can therefore be done in a single operation on a quantum computer. This perspective is leveraged when building machine learning models out of quantum algorithms, for example when we think of a quantum gate as a (highly structured) linear layer of an enormous neural network.
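
To see the “structured linear layer” picture concretely, here is a small NumPy sketch: a single-qubit gate on one of n qubits acts on the full 2^n-dimensional amplitude vector, but with a rigid Kronecker-product structure:

```python
import numpy as np

n, k = 5, 2                                   # five qubits; apply the gate to qubit 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # example single-qubit gate (Hadamard)

# build the full 32x32 matrix: identity on every wire except qubit k
U = np.eye(1)
for i in range(n):
    U = np.kron(U, H if i == k else np.eye(2))

psi = np.zeros(2**n)
psi[0] = 1.0      # amplitude vector of |00000>
psi_out = U @ psi # one "gate" = one big, highly structured matrix-vector product
```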

Sampling. All quantum computers can be understood as samplers that prepare a special class of distributions (quantum states) and that sample from these distributions via measurements. A very promising avenue is therefore to explore how samples from quantum devices can be used to train machine learning models. This has been investigated for Boltzmann machines and Markov logic networks, where the so-called Gibbs distribution — which is inspired by physics and hence comparably easy to realize with a physical system — plays an important role.

Every quantum computer is fundamentally a sampler that starts with a simple probability distribution over all possible measurement outcomes, computes a more complicated distribution, and samples an outcome via a measurement. Quantum devices are therefore interesting assistants for sampling-based training, for example with Boltzmann machines.

“We should think of early-generation quantum computers as small, partially programmable special-purpose devices that can take over costly jobs for machine learning which naturally suit them.”

Kernel evaluation. One very recent idea from Xanadu illustrates how more specific tasks in machine learning could be taken over by quantum devices. Kernel methods build machine learning models from a distance measure between data points, called a kernel. Quantum devices can be used to estimate certain kernels, including ones that are difficult to compute classically. The estimates from the quantum computer can be fed into a standard kernel method — such as a support vector machine. Inference and training are done purely classically, but augmented with the quantum special-purpose device.

The idea of “quantum kernels” is to use the quantum device only to compute kernels of data points, by estimating the inner product of two very high-dimensional quantum states. The kernel estimates can then be fed into a classical machine learning model for training and prediction. The figure is taken from this paper.
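
A sketch of the classical side of this pipeline, with a stand-in function playing the role of the device-estimated kernel:

```python
import numpy as np
from sklearn.svm import SVC

def quantum_kernel(x1, x2):
    # hypothetical stand-in: on hardware this would estimate the overlap
    # of two data-encoding quantum states; here a Gaussian kernel
    return np.exp(-np.sum((x1 - x2) ** 2))

def gram(A, B):
    return np.array([[quantum_kernel(a, b) for b in B] for a in A])

X = np.random.randn(20, 2)
y = (X[:, 0] > 0).astype(int)  # toy labels

svm = SVC(kernel="precomputed").fit(gram(X, X), y)  # training is purely classical
print(svm.predict(gram(X[:5], X)))                  # and so is inference
```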

In summary, we should think of early-generation quantum computers as small, partially programmable special-purpose devices that can accelerate certain tasks in machine learning, just as GPUs enabled deep learning.

2. Innovate machine learning

Besides enabling pre-existing machine learning techniques, quantum machine learning potentially has a lot more to offer. Recently, a growing number of physicists trained in the methods of quantum theory and quantum computing have begun to think about machine learning. Having physicists enter machine learning has proven fruitful in the past — just think of the physicist John Hopfield who introduced his intimate knowledge of the Ising model into machine learning and created what is now known as associative memory in Hopfield networks. Quantum computing can lead to entirely new machine learning models. These new models are tailor-made for quantum devices, and may turn out to be something that works well, but which the machine learning community has simply never thought up. Let us demonstrate this with two examples that are actively investigated in quantum machine learning.

Sampling from quantum distributions

Results from Amin et al. (2018) showing the KL-divergence between the target and model distribution of a classical Boltzmann machine (BM) and two versions of a quantum Boltzmann machine (bQBM and QBM) during training. After a few iterations, both quantum models show better results than the classical model. [Figure has been slightly adapted.]

It was mentioned above that quantum devices are good at sampling. For example, quantum annealers can be used to approximately sample from a Gibbs distribution to train Boltzmann machines. But this is not straightforward, since the quantum device actually prepares a quantum Gibbs distribution. Instead of trying to “make things classical”, researchers investigated what happens if we use the natural quantum distribution. It turns out that in some cases, “quantum samples” can be very useful for training, as shown in the figure above.

“Discovering new machine learning models is similar to searching for gold on a yet unknown island. In the case of quantum machine learning, we have found some promising signs of gold at the first beach, which is why we are building better expedition gear and venturing further — excited by what we might find.”

Variational quantum circuits

As a second example, consider a programmable quantum device — where “programmable” refers to some device parameters that can be tuned to change the specifications of an otherwise fixed computation. We set some of these parameters to the values of input data x, and treat other parameters as trainable variables θ. The device ultimately gives us some outputs y = f(x, θ) that depend on the inputs and variables. Such a quantum device (and this description is really very generic) implements a supervised learning model. This model is sometimes called a variational classifier, relating it to the concept of variational (i.e., trainable) quantum circuits. In a similar way, we can construct unsupervised models.

A programmable quantum device can be interpreted as a supervised machine learning model that computes y=f(x, θ). The trainable parameters θ can be learned via classical optimization. The figure is taken from this paper.
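
In PennyLane notation (current conventions; the data encoding and circuit shape here are arbitrary illustrative choices), such a model is only a few lines:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def f(x, theta):
    qml.RX(x, wires=0)                # the input x enters as a gate parameter
    qml.RY(theta, wires=0)            # theta is the trainable variable
    return qml.expval(qml.PauliZ(0))  # the output y = f(x, theta)
```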

The function f that the quantum device computes can be very specific to its hardware architecture, how parameters enter the computation, and how one relates variables, inputs, and outputs to the quantum algorithm. However, altogether we get a “quantum model”. Importantly, if we do not know how to simulate the quantum model with a classical computer, we have not only a new ansatz to do machine learning, but also one that can only be executed with a quantum device. The emerging literature on variational circuits shows how to train such “hardware-derived” models with classical computers, and groups around the world are currently busy investigating the power and limits of such quantum models.

Discovering new machine learning models is similar to searching for gold on a yet unknown island. In the case of quantum machine learning, we have found some promising signs of gold at the first beach, which is why we are building better expedition gear and venturing further — excited by what we might find.

3. Redefine the way we think about quantum computing

There is a third, more “behind-the-scenes” reason why we think that quantum machine learning is essential. Quantum machine learning, and its central subtask, optimization, are not only subfields of quantum computing, they are increasingly becoming approaches to quantum computing itself. As such, they have the potential to redefine the way we think about quantum computing. This holds for software design, hardware development and applications that rely on quantum computing.

Quantum machine learning is not only a potential application for quantum computing, but a way of thinking that permeates all of its aspects.

Quantum software design

So far, quantum algorithms have been carefully composed by people with a deep knowledge of the tricks of the trade. And even the “bible” of quantum computing, the textbook by Michael Nielsen and Isaac Chuang, remarks that “coming up with good quantum algorithms seems to be a difficult problem”. But quantum algorithms could also be learned.

Consider, for example, the preparation of resource states. Resource states feature widely, e.g., in continuous-variable applications or error correction with magic states, where the computation relies on a specific state being prepared as an input. Oftentimes, the algorithm to prepare the initial state is unknown. However, given a device with a certain type of gates, we can let the computer “learn” a gate sequence that prepares the desired state on the specific hardware. Likewise, entire quantum experimental setups (e.g., to generate highly entangled quantum systems) have been designed by machine learning approaches.

“The ideas of machine learning can transform the way we do quantum algorithmic design: Instead of composing algorithms, we can let the device learn them.”

Quantum hardware development

Machine learning is becoming increasingly popular for the post-processing of measurements in quantum computing. One day, the result of any quantum computation may rely on machine learning methods.

Intimate knowledge of machine learning can also help to build a quantum computer. Building a quantum computer will generate large amounts of labeled and unlabeled data. For example, data is generated when reading out the final quantum state of the device, when assessing the performance of gates, or when it comes to estimating measurement results. Quantum machine learning has a very active subfield, in which classical machine learning is used as a method to make sense of data produced by quantum experiments in the lab. Machine learning systems could easily become a standard component of quantum hardware.

“Machine learning may one day be a standard technique to read out the results of a computation with a quantum device.”

Applications

Quantum machine learning techniques are also closely tied with a variety of application areas. Quantum chemistry, for instance, is likewise interested in minimizing high-dimensional and difficult cost functions, e.g., to find the lowest energy configurations of molecules for drug discovery or material science. Quantum computers can be used to tackle these problems, with methods like the variational quantum eigensolvers mentioned above (see for example this recent result which scaled to a water molecule on an extremely noisy quantum computer).

Since machine learning and quantum chemistry are both heavily based on optimization, it is not surprising that they can both leverage similar quantum algorithms. A variational quantum eigensolver is, in essence, the same algorithm as a variational classifier, which was introduced as an innovative way to use quantum computers for machine learning. Understanding gained from the machine learning side will translate to new insights on the chemistry side. Good quantum machine learning algorithms will therefore have immediate consequences for other quantum applications based on data and optimization.

In summary, the potential of quantum machine learning to enable future AI applications, to innovate machine learning, and to contribute to the development of the field of quantum computing itself — these are the three reasons why quantum machine learning may have a big future when it comes to small-scale quantum devices.

This opinion piece is signed by: Maria Schuld, Nathan Killoran, Thomas Bromley, Christian Weedbrook, Peter Wittek

May 7, 2018

Dreaming up new materials with quantum computers

By Pierre-Luc Dallaire-Demers

Billions of dollars are devoted to designing, producing, and refining materials and molecules for applications in many sectors of the world economy [1]. With the vast amount of data now available, can we automate the discovery of new materials with tailored physical and chemical properties?

In principle, yes!

One promising solution is to use generative adversarial networks [2]. As shown in the figure, the idea is simple: train a neural network — called the generator — to output candidate materials with certain desired properties. To give the network some flexibility, we also provide some unstructured, randomly chosen inputs. Its task is to convert these unstructured random inputs into a selection of new materials with the target properties. Of course, the generator does not physically create the materials, but rather simulates them. In addition to proposing materials, the generator can also provide the physical protocol for fabricating them.

A second neural network — called the discriminator — is tasked with training the generator. The discriminator judges whether a given example comes from real experimental data or from the generator. If the data comes from a real source, the discriminator outputs ‘real’ and if the data comes from the generator, it outputs ‘fake’.

 

Illustration of the workflow used to train a quantum generative adversarial network with the goal, in this example, of creating new materials for solar cells.

 

The generator neural network is trained in an adversarial game with the discriminator. The generator must fool the discriminator into wrongly classifying its fake outputs as ‘real’. At the beginning of the training, it is easy to imagine that the generator only outputs random information and the discriminator randomly classifies its input as ‘real’ or ‘fake’. However, the discriminator can improve its performance, since the real data — which can be highly structured or even human-readable text — is highly distinguishable from the strings of random bits at the output of the untrained generator.

As the discriminator starts to distinguish between random noise and structured data, the generator can follow the gradient of the discriminator, learning to generate data which is more likely to fool it. At this point, the discriminator has to discover other features to accomplish its task, and the generator continues the adversarial game by improving its ability to generate those features.

In theory, at the end of the game, the generator will produce materials which closely resemble the data used for training. By changing the input parameters, the generator can be used to create completely novel materials, as well as detailing their fabrication process!
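
For readers who prefer code, here is a minimal classical GAN training loop (a purely illustrative PyTorch sketch on toy data; the quantum versions in refs. [4,5] replace these networks with parametrized quantum circuits):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))                # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) + 3.0  # placeholder for the "experimental" data

for step in range(1000):
    # discriminator step: label real data 1 ('real'), generated data 0 ('fake')
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # generator step: follow the discriminator's gradient to fool it
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```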

This whole idea could work in principle, but in practice we would run into a very fundamental problem. The properties of the materials and molecules in our universe are determined by their microscopic constituents, which obey the laws of quantum mechanics. Ultimately, the stability of matter itself is a direct consequence of the fact that the physical properties of fundamental particles like electrons and nuclei are not expressed with probabilities, but with complex probability amplitudes. The interference of electronic amplitudes around nuclei is the mechanism which ensures that electrons do not collapse onto nuclei, countering the electrostatic attraction. The properties of the interference pattern determine, to various degrees, the large-scale properties of a given material: whether it is an insulator, a conductor, a semiconductor, a superconductor, or any kind of magnet. In fact, all properties of all matter, from the smallest molecules all the way up to neutron stars, are determined by interference!

Nevertheless, accurately computing the interference patterns of complex probability amplitudes is difficult. In fact, it is so difficult that we must double the size of a classical computer each time we want to add a new quantum particle to a simulation. It would therefore be extremely difficult for a generative adversarial network coded on a classical computer to synthesize fundamentally quantum phenomena and generate revolutionary new molecules.

This difficulty can be turned into a powerful lever if we instead try to use the quantum behavior of the building blocks of matter to execute quantum calculations [3]. This is precisely why we are building a quantum computer at Xanadu. Universal quantum computers can simulate all physical phenomena in a way which is exponentially more efficient than current-day classical computers. By using universal quantum circuits, it is possible to extend classical adversarial networks to the quantum domain and to unlock the full power of quantum computers. In two new papers [4,5], my colleagues Nathan Killoran, Seth Lloyd, and Christian Weedbrook and I define these ideas for the first time: quantum generative adversarial networks and quantum adversarial learning!

This kind of algorithm, being fundamentally quantum in nature, has the potential to be exponentially more efficient than its classical counterpart at representing and generating highly correlated data, such as in our materials-design example. We can also imagine a future where more complicated optimization tasks would include the minimization of production and logistical costs, as well as macro-economic quantities such as market supplies and the prices of various financial derivatives. We expect that quantum machine learning will also be leveraged to improve our ability to understand the training of these networks.

[1] Aspuru-Guzik, A., Lindh, R. and Reiher, M., 2018. The Matter Simulation (R)evolution. ACS central science, 4(2), pp.144–152.

[2] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y., 2014. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680.

[3] Feynman, R.P., 1982. Simulating physics with computers. International journal of theoretical physics, 21(6–7), pp.467–488.

[4] Dallaire-Demers, P.L. and Killoran, N., 2018. Quantum generative adversarial networks. arXiv preprint arXiv:1804.08641.

[5] Lloyd, S. and Weedbrook, C., 2018. Quantum generative adversarial learning. arXiv preprint arXiv:1804.09139.

April 11, 2018

Introducing Strawberry Fields

By Josh Izaac

Here at Xanadu, we have some of the brightest minds tackling the problem of practical quantum computation. As a full-stack quantum startup, we take a three-pronged approach to quantum computing:

  • Hardware: Our experimental physicists are working around the clock to develop an on-chip quantum photonic processor. Stay tuned — we’ll have some exciting news to share soon.
  • Applications and algorithms: Our algorithms team has already started publishing results across diverse fields such as quantum chemistry, graph theory, machine learning and more, while simultaneously making advances in photonic quantum computing. Check out our recent papers if you haven’t already.
  • Software and simulations: Underpinning the work of our algorithms team is the ability to easily simulate our quantum photonics system. This is a hugely important component, allowing us to quickly flesh out ideas, and discover and probe interesting and unexpected behaviour.

As part of these efforts, we have developed a full-stack software solution for simulating quantum photonics and continuous-variable quantum computing. And that’s not all — we have also integrated support for TensorFlow, creating a framework that combines the latest advances in deep learning and machine learning with quantum computation.

The best part? Our framework is now open-sourced and available for anyone to use and play with.

Introducing Strawberry Fields and Blackbird.

An open-source full-stack quantum software platform for photonic quantum computing.

Using Strawberry Fields to simulate quantum teleportation from scratch.
  • Implemented in Python for ease of use, Strawberry Fields specifically targets continuous-variable quantum computation. Quantum circuits are written in the intuitive Blackbird quantum programming language (see the short example after this list).
  • Powers the Strawberry Fields Interactive web app, which allows anyone to run a quantum computing simulation via drag and drop. Quantum computing has never been simpler.
  • Includes a suite of quantum simulators implemented using NumPy and Tensorflow — these convert and optimize Blackbird code for classical simulation.
  • Future releases will target experimental backends, including photonic quantum computing chips.
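
To give a flavour of Blackbird, here is a minimal program written against the current Strawberry Fields API (details may vary between releases): squeeze one mode, mix it with the vacuum on a beamsplitter, and inspect the photon-number statistics.

    import strawberryfields as sf
    from strawberryfields.ops import Sgate, BSgate

    prog = sf.Program(2)
    with prog.context as q:
        Sgate(0.5) | q[0]                   # squeeze mode 0
        BSgate(0.8, 0.0) | (q[0], q[1])     # mix modes 0 and 1

    eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
    state = eng.run(prog).state
    print(state.fock_prob([1, 1]))          # P(one photon in each mode)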

These last two features are the most thrilling to us here at Xanadu. Not only is Strawberry Fields the first quantum computing simulator to include gradient-based optimization of quantum circuits — designed to be intuitive even without a background in machine learning — but soon you’ll also be able to run quantum experiments directly on our quantum photonics chip.

Simulation and physical experiments: all from the same piece of code.

What can I use it for?

Whatever you like! The sky is the limit.

Pushing the theoretical limits of quantum computation
Strawberry Fields is ideal for studying existing algorithms, or quickly prototyping new ideas and breakthroughs.

Designing and prototyping quantum photonics
Need to design a photonics experiment before committing to buying expensive components? Perhaps you’d like to optimize a photonics set-up, to make maximum use of the components you already have available.

Exploration and design of novel quantum circuits
On the other hand, do you know the output you need, but you’re not sure how exactly to get there? Exploit the built-in Tensorflow support and use deep learning to design and optimize circuits.
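
As a sketch of what that looks like in practice (assuming the current TensorFlow-enabled API, which may differ between releases), here we tune a single displacement gate so the circuit is driven towards a single-photon state:

    import strawberryfields as sf
    from strawberryfields.ops import Dgate
    import tensorflow as tf

    prog = sf.Program(1)
    alpha = prog.params("alpha")            # symbolic gate parameter
    with prog.context as q:
        Dgate(alpha) | q[0]

    eng = sf.Engine("tf", backend_options={"cutoff_dim": 7})
    a = tf.Variable(0.1)
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    for step in range(100):
        if eng.run_progs:
            eng.reset()                     # clear the previous run
        with tf.GradientTape() as tape:
            state = eng.run(prog, args={"alpha": a}).state
            loss = -state.fock_prob([1])    # maximize P(single photon)
        opt.apply_gradients([(tape.gradient(loss, a), a)])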

If you find yourself in a situation where you need additional features for your research, get in touch with us — Strawberry Fields is still under heavy development, and we are always open to hearing how we can make it a more integral part of your research workflow.

Okay, you’ve convinced me. How do I start?

To see Strawberry Fields in action immediately, try out our Strawberry Fields Interactive web application. Prepare your initial states, drag and drop gates, and watch your simulation run in real time right in your web browser.

To take full advantage of Strawberry Fields, however, you’ll want to use the Python library. The best place to start is our documentation — we have put together an extensive selection of pages discussing continuous-variable quantum theory, quantum algorithms, and of course installation instructions and details of the Strawberry Fields API. This is supplemented by an array of tutorials, ranging from the introductory (a basic guide to quantum teleportation) to the more advanced (machine learning and gradient-based optimization of quantum circuits).

You can also check out the source code directly on GitHub — the issue tracker is a great place to leave any feedback or bug reports. Alternatively, if you’d like to contribute directly, simply fork the repository and make a detailed pull request.

For more technical details regarding the Strawberry Fields architecture, be sure to read our whitepaper.

***

It is difficult to overstate just how excited we are. Strawberry Fields is the culmination of months of hard work, and gives us the chance to share our progress with the quantum computing community.

But this is just the start — we have a ton of exciting projects in the pipeline. Watch this space.

March 28, 2018

Using quantum machine learning to analyze data in infinite-dimensional spaces

By Maria Schuld

The latest Xanadu research paper proposes a novel perspective on quantum machine learning that sounds crazy at first sight. The core idea is to use the Hilbert space of a quantum system to analyze data. The Hilbert space is the place where the states that describe a quantum system live, and it is a very large place indeed. For a 50-qubit quantum computer, we are talking about a 2^50-dimensional space (that is 1,125,899,906,842,624 dimensions), and for a single mode of a continuous-variable quantum computer, the Hilbert space has an infinite number of dimensions. So how can we analyze data in such a Hilbert space if we have no chance to ever visit it, let alone perform computations in it?

Kernel methods implicitly embed data into a higher dimensional feature space, where we can hope that it gets easier to analyze.

In fact, machine learning practitioners have been doing this kind of thing for decades when using the beautiful mathematical theory of kernel methods [1]. Kernels are functions that compute a distance measure between two data points, for example between two images or text documents. We can build machine learning models from kernels, the most famous being support vector machines and Gaussian processes. It turns out that every kernel is related to a large — and sometimes infinite-dimensional — feature space. Computing the distance measure of two data points is equivalent to embedding these data points into the feature space and computing the inner product of the embedded vectors. In a sense, this is the opposite of neural networks, where we compress the data to extract a few features. Here, we effectively ‘blow up’ the data to make it potentially easier to analyze.

Mapping inputs into a large space and computing inner products is something that quantum computers can do rather easily. And any device that can encode a data point into a quantum state (which is really almost any quantum device), and which can estimate the overlap of two quantum states, can compute a kernel. Kernel methods are therefore a strikingly elegant approach to quantum machine learning. What is more, if the data encoding strategy is complex enough, we might even find cases where no classical computer could ever compute that same kernel. If we can show that our “quantum kernel” is useful for learning, we have a recipe for a quantum-assisted machine learning algorithm that is impossible to do classically: use the quantum device as a special-purpose estimator for kernel functions, and feed these estimates into a classical computer where a kernel method is trained and used for predictions. Voila!
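
As a toy illustration of this recipe (the figure below uses two-mode squeezed states; for brevity, this sketch encodes each data point into the phase of single-mode squeezing, simulated in a truncated Fock space), the kernel estimates can be fed straight into scikit-learn's support vector machine:

    import numpy as np
    import strawberryfields as sf
    from strawberryfields.ops import Sgate
    from sklearn.svm import SVC

    def feature_state(x, cutoff=10):
        """Return the ket of |phi(x)>: squeezed vacuum with phase x."""
        prog = sf.Program(1)
        with prog.context as q:
            Sgate(0.4, x) | q[0]
        eng = sf.Engine("fock", backend_options={"cutoff_dim": cutoff})
        return eng.run(prog).state.ket()

    def kernel(x1, x2):
        """Quantum kernel: the squared overlap |<phi(x1)|phi(x2)>|^2."""
        return np.abs(np.vdot(feature_state(x1), feature_state(x2))) ** 2

    X = np.array([0.1, 0.5, 2.0, 2.4])      # toy one-dimensional dataset
    y = np.array([0, 0, 1, 1])
    gram = np.array([[kernel(a, b) for b in X] for a in X])
    clf = SVC(kernel="precomputed").fit(gram, y)

At prediction time, the same overlap estimates between test points and training points fill the kernel matrix handed to clf.predict.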

A quantum-assisted support vector machine finds useful decision boundaries for small datasets. The kernel of the support vector machine is the inner product of 2-mode squeezed states, where the phase of the squeezing depends on the input data.

But the story does not end there. Quantum computing can actually be used to analyze data directly in feature space, without relying on the convenient detour via kernels. This idea has been successfully used for quantum-inspired machine learning with tensor networks (check out this great paper [2] and its successors), and now we want real quantum systems to do the job. For this, we use a variational circuit to define a linear model in Hilbert space.

To explain this in more detail, consider as an example the binary classification problem of the figure above, where we have to draw a line — a decision boundary — between two classes of data. We can encode a data point x into a quantum state |ϕ(x)>, which effectively maps it to a vector in Hilbert space. In a continuous-variable system, this vector is an infinite-dimensional Fock state. A unitary transformation W applied to the quantum state is nothing other than a linear model with respect to that vector. With a bit of post-processing, this defines a linear decision boundary, or hyperplane, that separates the data in Hilbert space. From support vector machines, we know that a linear model is very well suited to analyzing data in a feature space.

We can make the circuit depend on a set of parameters, W = W(θ), and train it to find the best linear decision boundary. These variational circuits have recently become a booming area of research in quantum machine learning [3,4,5]. With the theory of kernel methods, the approach of training circuits is enriched by a theoretical interpretation that can guide our attempts to build powerful classifiers.
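
Here is a minimal variational-classifier sketch in PennyLane. It is a qubit toy model (a continuous-variable version would encode into optical modes instead), and the particular gates are illustrative choices:

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev)
    def model(theta, x):
        qml.RX(x[0], wires=0)               # feature map: prepare |phi(x)>
        qml.RX(x[1], wires=1)
        qml.RY(theta[0], wires=0)           # trainable unitary W(theta)
        qml.RY(theta[1], wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(1))    # post-processing -> prediction

    def cost(theta, X, Y):
        # Square loss between predictions in [-1, 1] and labels of +/-1.
        return sum((model(theta, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

    X = np.array([[0.1, 0.2], [2.9, 3.0]], requires_grad=False)
    Y = np.array([1.0, -1.0], requires_grad=False)
    theta = np.array([0.01, 0.01], requires_grad=True)
    opt = qml.GradientDescentOptimizer(0.3)
    for _ in range(60):
        theta = opt.step(lambda t: cost(t, X, Y), theta)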

A quantum circuit (top) and its graphical representation as a neural network (bottom). Encoding a data point into optical modes maps it to an infinite-dimensional vector which can be interpreted as the hidden layer of a neural network. A variational quantum circuit together with measurements can then be used to extract two outputs from this layer, which are further processed to a binary prediction.

To summarize, using the Hilbert space of a quantum system for data analysis gives us a theoretical framework that can guide the development of quantum machine learning algorithms. It defines a potential road to show so-called “quantum supremacy” for real-life applications. Whether we can find cases in which this approach leads to useful classifiers is an exciting open question.

[1] B. Schoelkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, MA (2002).

[2] M. Stoudenmire and D. Schwab, Advances in Neural Information Processing Systems, pp. 4799–4807 (2016).

[3] G. Verdon, M. Broughton, and J. Biamonte, arXiv:1712.05304 (2017).

[4] E. Farhi and H. Neven, arXiv:1802.06002 (2018).

[5] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, arXiv:1803.00745 (2018).

February 8, 2018

Making a Neural Network, Quantum

By Tom Bromley

Hello world, we are in Xanadu.

We are working to manufacture the world’s first all-on-chip photonic quantum processor, using cutting-edge techniques to harness the powerful properties of light. The purpose of this blog is to keep you updated on our progress. From exciting new findings to testing challenges, and everything in between, we will keep you in tune with the latest in the world of quantum tech.

Quantum machine learning is one of the primary focuses at Xanadu. Our machine learning team is strengthening the connections between artificial intelligence and quantum technology. In this blog post we discuss how a neural network can be made quantum, potentially giving huge increases in operating speed and network capacity. This post requires no prior scientific or mathematical background. Even if you’ve never heard of a neural network, read on! For more details, a paper explaining these findings is available here.

Neural networks

You have probably benefited from machine learning today. And yesterday. As well as the day before. Machine learning is becoming increasingly embedded in our daily routine. If you have checked a social media account, performed an online search, or even commuted to work, a remote server may have shaped your experience using a wide range of learning algorithms. The objective of machine learning is to give computers the power to make predictions and generalizations from data without explicitly telling them how to do so. It is an extremely exciting and rapidly evolving area; take a look at a beginner’s introduction.

A very successful approach in machine learning is to design an artificial neural network, which is inspired by the structure of neurons in the brain. Imagine a collection of points, each of which can be in one of two states: “on” or “off”. These points are interconnected with wires of variable strength, as shown in the diagram below. The network is operated by allowing each neuron to decide its state based upon the states of the neurons connected to it, also bearing in mind the strength of the connections. One of the advantages of neural networks is the ability to choose the structure based upon the problem; see here to appreciate the neural network zoo! Neural networks have been used for a variety of applications, including voice recognition and cancer detection.

Quantum neural networks

So where can quantum technology help? At Xanadu, we have been looking at how to embed a type of neural network into a quantum system. Our first step is to use a property called quantum coherence, where a system can concurrently exist in a combination of states – in what we call a coherent superposition. The trick is then to associate each neuron with a state of the system: if the neuron is “on”, its corresponding state appears with a positive sign in the superposition, while if it is “off”, the state appears with a negative sign. We have focused on systems of multiple quantum bits (qubits), each of which can be either “up” or “down”. By looking at all of the combinations of “up” and “down” possible in our collection of qubits, you can see that an exponential number of neuron configurations can be stored within a small number of qubits. For example, the diagram below shows that we can store any configuration of 4 neurons in only 2 qubits!
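
In code, the encoding is nothing more than a normalized sign vector. A small NumPy sketch (the mapping is illustrative, not the paper's exact convention):

    import numpy as np

    def encode(neurons):
        """Map a +/-1 neuron configuration to quantum-state amplitudes."""
        x = np.asarray(neurons, dtype=float)
        return x / np.linalg.norm(x)        # normalize to a valid state

    # Four neurons stored in the 2**2 = 4 amplitudes of just two qubits.
    print(encode([+1, -1, -1, +1]))         # [ 0.5 -0.5 -0.5  0.5]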

By using this way of embedding neurons within qubits, as well as accessing an increased storage capacity, we unlock access to a huge number of quantum algorithms that can help us to speed up processing the network. The first step is to choose the structure of the neural network, so that we know which quantum algorithm is best suited to give us a performance advantage. This post focuses on the Hopfield network, a structure in which all of the neurons are connected to each other with variable weights (forming a complete graph). The Hopfield network can be used as a content-addressable memory system: configurations of the neurons are associated with patterns (for example, images), which are stored by altering the weights of the connections between neurons. This is known as Hebbian learning. A new pattern can then be loaded into the Hopfield network, which is then processed with the goal of recovering the most similar pattern stored in memory.

The conventional way of operating the Hopfield network is to keep picking neurons at random and updating them by considering the connected neurons, along with their weights. One of our insights is to realize that the Hopfield network can instead be run in a single step by inverting a matrix containing information on the weights between all neurons. Then, using the embedding into qubits discussed above, we can turn to the famous quantum HHL algorithm to process the Hopfield network. The HHL algorithm can invert a matrix exponentially faster than the best algorithms running on standard computers. However, to exploit the HHL algorithm we need to be able to perform something called Hamiltonian simulation of our matrix.
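
For reference, here is the conventional workflow sketched in plain NumPy: Hebbian storage followed by random asynchronous updates. The quantum algorithm replaces this update loop with a single matrix inversion via HHL, which is not shown here.

    import numpy as np

    def hebbian_weights(patterns):
        """Hebbian learning: average of outer products, zero diagonal."""
        P = np.asarray(patterns, dtype=float)
        W = P.T @ P / P.shape[1]
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, probe, steps=200, seed=0):
        """Pick neurons at random and update them from their neighbours."""
        rng = np.random.default_rng(seed)
        x = np.array(probe, dtype=float)
        for _ in range(steps):
            i = rng.integers(len(x))
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
        return x

    W = hebbian_weights([[+1, -1, +1, -1], [+1, +1, -1, -1]])
    print(recall(W, [+1, -1, +1, +1]))      # recovers the closest stored pattern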

The Hamiltonian of a quantum system governs how it naturally evolves in time. Hamiltonian simulation is therefore the art of making a quantum system evolve in a controlled way so that its evolution is as close as possible to the given Hamiltonian. One of the novel techniques that we have developed is a method of Hamiltonian simulation for the Hopfield network matrix. This is achieved by repeatedly “partial swapping” batches of the memory patterns to be stored. By “partial swapping,” we mean that our qubits are partly swapped with another bank of qubits holding sequences of the memory patterns. This construct can be thought of as the quantum analogue of Hebbian learning (qHeb), and we will be releasing a paper with more details shortly. A diagram summarizing our quantum approach for the Hopfield network is given below. We call our quantum routine qHop, which uses the quantum subroutine qHeb.

So, how can this help?

Encoding a neural network within qubits gives an exponential advantage in storage capacity, while the algorithms qHop and qHeb team up to give an exponential increase in processing speed. This means that we expect to run larger neural networks faster on a quantum processor than we could on a standard computer. The Hopfield network itself has applications as a pattern recognition system, as well as for solving the travelling salesman problem; read this book for a very clear explanation.

We have highlighted in particular the application of the Hopfield network within genetics as a recognizer of infectious diseases. Imagine that an outbreak of flu has occurred and scientists have partially sequenced the genetic code of the virus. Their goal is to match the genetic sequence with one of the known strains of flu, such as H1N1 or H5N1. By loading the partial sequence into the Hopfield network, which has already stored all the known strains within the neuron connection weightings, the scientists can work out which strain of flu has caused the outbreak. In the image below, we show how the genetic data, in terms of the RNA bases A, C, G, and U, can be stored in the neurons of the network. The plot shows a comparison between simulated results of operating the Hopfield network using the conventional approach and our new matrix-inversion-based approach. Running this algorithm on a quantum processor would also give improvements in storage capacity and operating speed.

What’s next?

We are very excited to uncover improvements to the Hopfield network through quantum mechanics. Yet, there is still more work to be done! The question of how to quickly read in and out data from our quantum device still needs to be addressed.

At the same time, the experimental team here at Xanadu has been working on innovative chip designs and implementations of photonic quantum processors. One of our main objectives is to combine new insights in quantum machine learning with real-world photonic quantum processors. We hope to use the power of laser light within our chip, which can go far beyond the power of even qubits, to make a disruptive impact on machine learning.

Stay posted for more breakthroughs!
