By Nathan Killoran, Josh Izaac, Juan Miguel Arrazola, and Thomas R. Bromley
At Xanadu we are developing a photonic quantum computer: a device that processes information stored in quantum states of light. We are very excited by the possibilities that this approach brings. Photonic quantum computers naturally use continuous degrees of freedom — like the amplitude and phase of light — to encode information. This continuous, or analog, structure makes photonic devices a natural platform for quantum versions of neural networks.
How do we mimic a neural network using a photonic system? And where does quantum enter the game?
This summer, we released an exciting new paper which resolves these questions. We propose a photonic circuit which consists of a sequence of repeating building blocks, or layers. Layers can be composed, with the output of one layer serving as the input to the next. These photonic layers are akin to the layers which appear in classical neural networks.
Classical nets take an input x, multiply it by a weight matrix W, add a bias b, and pass the result through a nonlinear function φ (such as tanh or ReLU): y = φ(Wx + b).
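In code, one such classical layer takes only a few lines (a minimal numpy sketch; the weight, bias, and input values are arbitrary illustrations):

```python
import numpy as np

def classical_layer(x, W, b):
    """A single dense layer: multiply by weights, add bias, apply tanh."""
    return np.tanh(W @ x + b)

# Arbitrary 2x2 weights, bias, and a 2-dimensional input
W = np.array([[0.5, -0.3], [0.1, 0.8]])
b = np.array([0.1, -0.2])
x = np.array([1.0, 2.0])

y = classical_layer(x, W, b)  # a 2-dimensional output, each entry in (-1, 1)
```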
Our quantum layer mimics this functionality using photonic quantum gates: interferometers (made from phase shifters and beamsplitters), squeezing and displacement gates, and a fixed nonlinear transformation. These are the same gates that are used to build a photonic quantum computer, so our quantum neural network architecture has all the power of quantum computers.
The basic layer unit of a photonic quantum neural network. Gates are coloured to indicate which classical component they are related to.
The quantum neural network retains strong ties to classical neural networks. In fact, the quantum version can be used to run the classical version, by using the quantum net in a way which does not generate any quantum weirdness (superposition, entanglement, etc.). The similarity is illustrated through the colouring of gates in the two images above. The interferometers and squeezing gates correspond to the weight matrix, the displacement gates to the bias, and the quantum nonlinearity to the classical nonlinearity.
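The link between the weight matrix and the interferometer and squeezing gates comes from the singular value decomposition: any weight matrix splits into an orthogonal transformation (a passive interferometer), a diagonal scaling (squeezing), and a second orthogonal transformation. Here is a small numpy illustration of the decomposition itself; the further mapping from these factors to physical gate parameters is detailed in the paper:

```python
import numpy as np

# An arbitrary weight matrix, as it would appear in a classical layer
W = np.array([[1.2, -0.4], [0.3, 0.9]])

# Singular value decomposition: W = U2 @ diag(s) @ U1
U2, s, U1 = np.linalg.svd(W)

# U1 and U2 are orthogonal (realizable by passive interferometers),
# while the diagonal of singular values s corresponds to squeezing.
reconstructed = U2 @ np.diag(s) @ U1
```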
We trained the quantum neural network to do several tasks: curve fitting, fraud detection, a classical-quantum autoencoder, and generating images. We will highlight some of these cool applications in a future blog post. In the meantime, we’d like to share one particular and powerful use of photonic quantum neural networks.
Learning quantum states and quantum gates
While it is easy to simulate arbitrary quantum states of light using the Strawberry Fields software package, sometimes we forget how much of a challenge generating them can be for quantum computing researchers — who might spend days rearranging an equation to only end up back where they started. As reported in another recent research paper, the quantum neural network architecture we have pioneered can help in these cases. Using training methods from machine learning, we can optimize a quantum neural network circuit to produce arbitrary quantum states. Once we have learned the correct parameters, this state-preparation subroutine can then be reused within other quantum circuits or algorithms.
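The shape of that training loop is easy to sketch. The real thing uses continuous-variable gates and a quantum simulator, but the idea is captured by this toy example, where a single rotation parameter is learned so that a fixed start state is steered into a target state (the two-level system and all values here are illustrative stand-ins, not the photonic circuit from the paper):

```python
import numpy as np

target = np.array([np.cos(0.7), np.sin(0.7)])  # target state to learn

def circuit(theta):
    """A one-parameter 'circuit': rotate the fixed start state by theta."""
    return np.array([np.cos(theta), np.sin(theta)])

def infidelity(theta):
    """Cost: 1 - |<target|circuit(theta)>|^2, zero when states match."""
    return 1.0 - abs(np.dot(target, circuit(theta))) ** 2

# Gradient descent with a finite-difference gradient, as a classical
# optimizer would do around calls to a quantum simulator
theta, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    grad = (infidelity(theta + eps) - infidelity(theta - eps)) / (2 * eps)
    theta -= lr * grad
```

Once the learned parameters are in hand, the same circuit settings can be reused as a state-preparation subroutine.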
A quantum computing expert might take weeks or months to craft such a circuit, while the machine learning approach can find solutions on a timescale of hours. This can dramatically accelerate the research and development process, potentially leading to breakthroughs in quantum computing.
For example, one tricky problem in quantum experiments is the creation of single-photon states — currently done via a random process known as spontaneous parametric down-conversion. We trained a quantum neural network which can produce on-demand single photon states using a fixed set of quantum gates. Quantum states of light can be represented by smooth landscapes called Wigner functions, so we can visualize the state output by our quantum neural network during training with a 3D animation:
The output of the quantum neural network at different stages of training. In these plots, we always begin from a fixed starting state (shown by the initial peak, called a Gaussian). We then gradually learn to output a single photon state (shown here by the final shape with a red trough at the center).
We can even do this with more complicated states, such as the so-called Schrödinger cat states, or the ON state — an important state in photonic quantum computing, used to construct various quantum gates.
Going one step further, our paper shows how to implement quantum gates using the quantum neural network architecture. Gates are another important ingredient in quantum computation, giving us a tool to control how quantum systems evolve. All quantum algorithms, including the famous Shor’s algorithm, require gates to function, yet it can be hard to work out how to implement them physically. Our approach automates this procedure.
To visualize this process, we can use the fact that gates in quantum computing are unitary matrices, with complex-valued entries. We can thus depict a gate graphically by colouring the real and imaginary entries of the corresponding matrix according to their magnitude.
A random four-dimensional gate U, depicted using the real and imaginary parts of the corresponding unitary matrix.
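A random unitary like the U above can be generated with a QR decomposition of a complex Gaussian matrix, a standard recipe; the plotting itself is omitted here:

```python
import numpy as np

def random_unitary(dim, seed=42):
    """Sample a Haar-random unitary via QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    # Fix the column phases so the distribution is uniform (Haar)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

U = random_unitary(4)

# The two panels of the figure correspond to these matrices
real_part, imag_part = U.real, U.imag
```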
The transformation performed by our quantum neural network architecture can also be represented with this simple colouring scheme. The image below demonstrates how the quantum neural network transformation — initially random — is progressively trained to match a specific target gate.
Each frame in the above animation corresponds to the transformation carried out by a quantum neural network at a particular step in training. At the end of training, the transformation closely matches the desired one given above by U.
All the code for our state and gate learning is available on GitHub, and uses Python, the machine learning framework TensorFlow, and our quantum simulation software Strawberry Fields. Download it and see what states you can generate!
Introducing OpenFermion support and the Quantum Machine Learning Toolbox (QMLT)
Since our last update, we have been hard at work improving Strawberry Fields, our photonics-based quantum software platform; this includes new features for decomposing optical circuits into the continuous-variable gate set. It has also been great to see the burgeoning community growing around Strawberry Fields, creating content ranging from tutorials that help us understand Bell correlations in continuous-variable (CV) systems to quantum battleship games. We are also incorporating feedback we have received from users — come say hi on our Slack channel if you haven’t already!
Behind the scenes, Strawberry Fields is an integral part of our research workflow. Our latest paper, Continuous-variable quantum neural networks, uses Strawberry Fields to demonstrate a new architecture for quantum neural networks — including a neat example where a quantum neural network is trained to generate Tetris blocks, or “Tetrominos”.
The potential applications of quantum computing are huge, and our goal with Strawberry Fields is to make them as accessible as possible — whether you are a quantum physicist, chemist, machine learning scientist, or just having a bit of fun. To that end, we are delighted to introduce two new applications that build on the Strawberry Fields platform: SFOpenBoson and the Quantum Machine Learning Toolbox (QMLT).
OpenFermion and SFOpenBoson
The quantum simulation of photons and other bosons is a natural fit for Strawberry Fields and the photonic hardware we are developing at Xanadu. We are thrilled to announce that this is now even more accessible — we have joined forces with the Google Quantum A.I. research team to introduce bosonic systems to OpenFermion, the collaborative open-source chemistry package for quantum computers.
Not only that, but bosonic systems constructed in OpenFermion can be simulated in Strawberry Fields via our new SFOpenBoson plugin — no prior knowledge of quantum circuits or decompositions required! We handle that for you behind the scenes, and allow you to view which quantum gates were applied.
For example, quantum simulation of the Bose-Hubbard model can be done in as few as six lines of code:
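The actual six-line plugin code lives in the SFOpenBoson documentation; to give a feel for the object being simulated, here is the two-site Bose-Hubbard Hamiltonian written out directly in numpy (the cutoff and parameter values are arbitrary illustrative choices):

```python
import numpy as np

cutoff = 3          # truncate each mode at two photons (illustrative)
J, U = 1.0, 1.5     # hopping strength and on-site interaction

# Annihilation and number operators on one truncated mode
a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)
n = a.conj().T @ a
I = np.eye(cutoff)

# Lift the single-mode operators to the two-mode space
a1, a2 = np.kron(a, I), np.kron(I, a)
n1, n2 = np.kron(n, I), np.kron(I, n)
I2 = np.eye(cutoff ** 2)

# Two-site Bose-Hubbard Hamiltonian:
#   H = -J (a1^dag a2 + a2^dag a1) + (U/2) sum_i n_i (n_i - 1)
H = (-J * (a1.conj().T @ a2 + a2.conj().T @ a1)
     + 0.5 * U * (n1 @ (n1 - I2) + n2 @ (n2 - I2)))
```

SFOpenBoson decomposes the time evolution under Hamiltonians like this one into continuous-variable gates behind the scenes.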
OpenFermion is the definitive quantum chemistry library for quantum computation, and we are excited to be part of a collaboration that includes companies at the forefront of quantum computing, such as Google, D-Wave, and Rigetti.
“Many important physical phenomena in electronic structure arise due to interactions between bosons (e.g., photons, phonons) and fermions (e.g., electrons),” says Ryan Babbush, the lead researcher of OpenFermion at Google Quantum A.I.
“The introduction of tools for representing bosonic systems adds important new functionality to OpenFermion, and meaningfully extends the scope of the library.”
Have a read of the Strawberry Fields section in the OpenFermion paper, and check out our SFOpenBoson documentation and tutorials to see how you can use OpenFermion in conjunction with Strawberry Fields.
Quantum Machine Learning Toolbox
Quantum machine learning is a rapidly advancing area, with applications stretching across multiple disciplines. We believe everyone — no matter your machine learning prowess — can take advantage of this functionality in Strawberry Fields. To help flatten the learning curve, we are delighted to introduce the Quantum Machine Learning Toolbox (QMLT) — a Strawberry Fields application that extends the core machine learning functionality with useful tools and helper functions.
The toolbox supports a number of things that make your life easier:
Easily set up optimization, supervised, and unsupervised learning tasks
Run and score trained circuits, predict new inputs, and compute the accuracy on a training set
Use different optimizers, including numerical and automatic methods
Visualize and log the cost function and parameters during training (see image)
Do a warm start with pretrained models
The QMLT integrates with Strawberry Fields and quantum circuits, making complicated machine learning exercises simple to define and run. The built-in numerical learner even opens up all three SF simulator backends for machine learning, with built-in live plots so you can track your optimization progress in real time.
The Quantum Machine Learning Toolbox is available right now at our GitHub repository, with online documentation available here. Check out the docs for examples covering optimization, supervised, and unsupervised learning. You can get started by reading our introduction to quantum variational circuits, then have a go at working through some of the curated machine learning and optimization tutorials.
We hope you enjoy using these new tools and applications; if you do any cool projects or research, reach out to us and we’ll post them in the Strawberry Fields gallery. We have more exciting things in the works for Strawberry Fields — stay tuned!
Quantum machine learning is a new buzzword in quantum computing. This emerging field asks — amongst other things — how we can use quantum computers for intelligent data analysis. At Xanadu we are very excited about quantum machine learning and spend a fair amount of time thinking about it. Here is why.
First of all, it is important to note that quantum machine learning is very young, meaning that it is not yet clear what results and commercial applications to expect from it. This was evident at the “Quantum meets Industry” panel at the quantum machine learning conference in Bilbao, Spain. When asked whether the time is ripe for commercial investments into quantum machine learning, the experts from companies such as IBM, Microsoft, and NASA were noticeably careful with their answers. Still, almost every company involved in quantum computing today, including those represented on the panel, has a machine learning group.
If even the ‘big players’ are struggling to make definite statements about the — let’s say 5-year — outlook on using quantum computers for machine learning tasks, should quantum computing startups like Xanadu get on board? We think the answer is yes and want to put three arguments forward:
Early-generation quantum devices are promising newcomers to the growing collection of AI accelerators, thereby enabling machine learning.
Quantum machine learning can lead to the discovery of new models and thereby innovate machine learning.
Machine learning, and quantum machine learning in particular, will increasingly permeate all aspects of quantum computing, redefining the way we think about quantum computing.
Let us go through these points one by one.
1. Enable machine learning
Early-generation quantum devices vary in their programming models, their generality, the quantum advantage they promise, and the hardware platforms that they run on. Across the board, they are very different from the universal processors that researchers envisioned when the field started in the 1990s. For machine learning, this may be a feature rather than a bug.
Quantum devices as special-purpose AI accelerators
Many current quantum technologies resemble special-purpose hardware like Application-Specific Integrated Circuits (ASICs), rather than a general-purpose CPU. They are hardwired to implement a limited class of quantum algorithms. More advanced quantum devices can be programmed to run simple quantum circuits, which makes them more similar to Field-Programmable Gate Arrays (FPGAs), integrated circuits that are programmed using a low-level, hardware-specific Hardware Description Language. In both cases, an intimate knowledge of the hardware design and limitations is needed to run effective algorithms.
ASICs and FPGAs find growing use in machine learning and artificial intelligence, where their slim architectures reduce the overhead of a central processor and naturally suit the tasks they specialize in. If current quantum technologies resemble this classical special-purpose hardware, they could find applications in machine learning in a similar fashion, and this even without universal quantum computing or exponential quantum speedups.
“Quantum technologies may eventually have a place in the mix of AI hardware as we develop newer and newer techniques to advance towards artificial general intelligence.”
Taking a look at the most advanced AI solutions reveals that they already use a blend of technologies. More and more computation is done on special-purpose devices located at the edge, where technology interacts with its environment (think of fingerprint recognition for unlocking a phone or smile detection in a camera). At the other end of the spectrum, calculations are done on GPU clusters (for instance, traffic routing or tagging photos). As a matter of fact, a modern GPU is already a technology blend in itself: the latest Volta chips by Nvidia include low-precision ASICs called Tensor Cores, designed specifically to accelerate the training of neural networks. Google follows a similar path with their Tensor Processing Units (TPUs) that are designed to support the TensorFlow machine learning framework. In short, AI has already embraced heterogeneity. Quantum technologies may eventually have a place in the mix of AI hardware. And this mix has to be as strong as possible if we want to advance towards artificial general intelligence.
Finally, hardware can significantly shape the advancement of software. In the 2010s, the use of GPUs contributed to the renaissance of neural network models (that have been around for decades but were largely discarded as untrainable). Similarly, accelerating quantum technologies could make their very own contribution to lifting specific machine learning methods into the realm of the doable, or even of the cutting-edge. This is particularly true for methods that are considered too hard to train with classical hardware and which were superseded by more convenient competitors.
What quantum computers are good at
If early-generation quantum devices can be thought of as special-purpose AI accelerators, what exactly can quantum computers contribute to machine learning and AI? Why would we want to use “quantum ASICs”? Let’s look at a selection of exciting candidate tasks, namely optimization, linear algebra, sampling, and kernel evaluations.
Optimization. Just like in machine learning, optimization is a prominent task in quantum physics. Physicists (and quantum chemists) are typically interested in finding the point of lowest energy in a high-dimensional energy landscape. This is the basic paradigm of adiabatic quantum computing and quantum annealing. Unsurprisingly, one of the first tasks for quantum computers investigated in the context of machine learning was optimization. The D-Wave quantum annealer, a special-purpose device that can solve so-called quadratic unconstrained binary optimization problems, was used as early as 2008 to solve classification tasks. More recently, the hybrid quantum-classical technique of variational circuits has been proposed. There, a quantum device is used to evaluate a hard-to-compute cost function, while a classical device performs an optimization based on this information.
Linear algebra. When speaking about potential exponential quantum speedups for machine learning, people usually refer to the inherent ability of quantum computers to execute linear algebra computations. There are many subtleties to this claim, and its near-term prospects on hardware are not always clear. One of the bottlenecks is data encoding: to use a quantum computer as a kind of super-fast linear algebra enabler for large matrix multiplications and eigendecompositions (not unlike TPUs), we first have to “load” the large matrix onto the quantum device, a procedure that is highly non-trivial.
However, there may be near-term benefits in understanding quantum computers as fast linear algebra processing units. Mathematically speaking, a quantum gate executes a multiplication of an exponentially — or even infinitely — large matrix with a similarly large vector. Specific costly linear algebra computations, namely those corresponding to quantum gates, can therefore be done in a single operation on a quantum computer. This perspective is leveraged when building machine learning models out of quantum algorithms, for example when we think of a quantum gate as a (highly structured) linear layer of an enormous neural network.
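To make the point concrete with the smallest possible example: a two-qubit gate is already a 4 × 4 matrix acting on the full state vector, and an n-qubit gate is a 2^n × 2^n one. Below is a classical simulation of one such matrix-vector product (a CNOT gate is used purely as a familiar illustration; the photonic gates discussed in this post act on even larger, continuous-variable spaces):

```python
import numpy as np

# A two-qubit CNOT gate: a 4x4 unitary acting on the full state vector
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The state (|00> + |10>) / sqrt(2): a superposition on the first qubit
state = np.array([1, 0, 1, 0], dtype=complex) / np.sqrt(2)

# On a quantum device this matrix-vector multiply is one physical gate;
# classically, the cost grows exponentially with the number of qubits
out = CNOT @ state  # the entangled Bell state (|00> + |11>) / sqrt(2)
```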
Sampling. All quantum computers can be understood as samplers that prepare a special class of distributions (quantum states) and sample from these distributions via measurements. A very promising avenue is therefore to explore how samples from quantum devices can be used to train machine learning models. This has been investigated for Boltzmann machines and Markov logic networks, where the so-called Gibbs distribution — which is inspired by physics and hence comparatively easy to realize with a physical system — plays an important role.
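For intuition, here is exact sampling from the (classical) Gibbs distribution of a tiny two-spin Boltzmann machine. The appeal of quantum samplers is that they could produce such samples physically for systems far too large to enumerate like this (the couplings and temperature below are arbitrary):

```python
import numpy as np
from itertools import product

# Energy of a tiny two-spin Boltzmann machine (arbitrary weights)
w, b1, b2, beta = 0.8, 0.2, -0.5, 1.0

def energy(s1, s2):
    return -(w * s1 * s2 + b1 * s1 + b2 * s2)

# Exact Gibbs distribution: p(s) proportional to exp(-beta * E(s))
states = list(product([-1, 1], repeat=2))
weights = np.array([np.exp(-beta * energy(s1, s2)) for s1, s2 in states])
probs = weights / weights.sum()

# Draw samples, as one would use them to train a Boltzmann machine
rng = np.random.default_rng(0)
samples = rng.choice(len(states), size=1000, p=probs)
```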
“We should think of early-generation quantum computers as small, partially programmable special-purpose devices that can take over costly jobs for machine learning which naturally suit them.”
Kernel evaluation. One very recent idea from Xanadu illustrates how more specific tasks in machine learning could be taken over by quantum devices. Kernel methods build machine learning models on a similarity measure between data points, called a kernel. Quantum devices can be used to estimate certain kernels, including ones that are difficult to compute classically. The estimates from the quantum computer can be fed into a standard kernel method, such as a support vector machine. Inference and training are done purely classically, but augmented with the quantum special-purpose device.
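Schematically, the quantum device only fills in the entries of the kernel (Gram) matrix; everything downstream is a standard classical kernel method. In the sketch below, a classical Gaussian kernel stands in for the quantum-estimated one:

```python
import numpy as np

def kernel(x1, x2, gamma=0.5):
    """Stand-in for a quantum-estimated kernel; here a classical Gaussian."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

# A toy dataset
X = np.array([[0.0, 0.0], [1.0, 0.2], [0.1, 0.9], [1.1, 1.0]])

# The quantum device would be called once per entry of this Gram matrix
K = np.array([[kernel(a, b) for b in X] for a in X])

# The result is symmetric and positive semidefinite, as a kernel matrix
# must be, and can be handed to any classical kernel method (e.g. an SVM)
```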
In summary, we should think of early-generation quantum computers as small, partially programmable special-purpose devices that can accelerate certain tasks in machine learning, just like the way GPUs enabled deep learning.
2. Innovate machine learning
Besides enabling pre-existing machine learning techniques, quantum machine learning potentially has a lot more to offer. Recently, a growing number of physicists trained in the methods of quantum theory and quantum computing have begun to think about machine learning. Having physicists enter machine learning has proven fruitful in the past — just think of the physicist John Hopfield who introduced his intimate knowledge of the Ising model into machine learning and created what is now known as associative memory in Hopfield networks. Quantum computing can lead to entirely new machine learning models. These new models are tailor-made for quantum devices, and may turn out to be something that works well, but which the machine learning community has simply never thought up. Let us demonstrate this with two examples that are actively investigated in quantum machine learning.
Sampling from quantum distributions
It was mentioned above that quantum devices are good at sampling. For example, quantum annealers can be used to approximately sample from a Gibbs distribution to train Boltzmann machines. But this is not straightforward, since the quantum device actually prepares a quantum Gibbs distribution. Instead of trying to “make things classical”, researchers investigated what happens if we use the natural quantum distribution. It turns out that in some cases, “quantum samples” can be very useful for training, as shown in the figure on the left.
“Discovering new machine learning models is similar to searching for gold on a yet unknown island. In the case of quantum machine learning, we have found some promising signs of gold at the first beach, which is why we are building better expedition gear and venturing further — excited by what we might find.”
Variational quantum circuits
As a second example, consider a programmable quantum device — where “programmable” refers to some device parameters that can be tuned to change the specifications of an otherwise fixed computation. We set some of these parameters to the values of input data x, and treat the remaining parameters as trainable variables θ. The device ultimately gives us some outputs y = f(x, θ) that depend on inputs and variables. Such a quantum device (and this description is really very generic) implements a supervised learning model. This model is sometimes called a variational classifier, relating it to the concept of variational (i.e., trainable) quantum circuits. In a similar way we can construct unsupervised models.
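In code, the pattern looks like this. Everything here is a classical stand-in — the “device” is an ordinary function — but the shape of the model, with data parameters x and trainable parameters θ producing an output y = f(x, θ), is the point:

```python
import numpy as np

def quantum_model(x, theta):
    """Stand-in for a parametrized quantum device: the input x sets some
    gate parameters, the trainable theta sets others, and a measurement
    yields the output y = f(x, theta)."""
    return np.cos(theta[0] * x + theta[1]) ** 2

# A tiny supervised dataset (illustrative values)
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 0.5, 0.0])

def loss(theta):
    return np.mean((quantum_model(xs, theta) - ys) ** 2)

# The classical half of the hybrid loop: finite-difference gradient
# descent, calling the (simulated) quantum device to evaluate the cost
theta, lr, eps = np.array([0.1, 0.1]), 0.05, 1e-6
for _ in range(1000):
    grad = np.array([
        (loss(theta + eps * np.eye(2)[i]) - loss(theta - eps * np.eye(2)[i]))
        / (2 * eps)
        for i in range(2)
    ])
    theta -= lr * grad
```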
The function f that the quantum device computes can be very specific to its hardware architecture, how parameters enter the computation, and how one relates variables, inputs, and outputs to the quantum algorithm. Altogether, however, we get a “quantum model”. Importantly, if we do not know how to simulate the quantum model with a classical computer, we have not only a new ansatz to do machine learning, but one that can only be executed with a quantum device. The emerging literature on variational circuits shows how to train such “hardware-derived” models with classical computers, and groups around the world are currently busy investigating the power and limits of such quantum models.
Discovering new machine learning models is similar to searching for gold on a yet unknown island. In the case of quantum machine learning, we have found some promising signs of gold at the first beach, which is why we are building better expedition gear and venturing further — excited by what we might find.
3. Redefine the way we think about quantum computing
There is a third, more “behind-the-scenes” reason why we think that quantum machine learning is essential. Quantum machine learning, and its central subtask, optimization, are not only subfields of quantum computing, they are increasingly becoming approaches to quantum computing itself. As such, they have the potential to redefine the way we think about quantum computing. This holds for software design, hardware development and applications that rely on quantum computing.
Quantum software design
So far, quantum algorithms have been carefully composed by people with a deep knowledge of the tricks of the trade. Even the “bible” of quantum computing, the textbook by Michael Nielsen and Isaac Chuang, remarks that “coming up with good quantum algorithms seems to be a difficult problem”. But quantum algorithms could also be learned.
Consider for example the preparation of resource states. Resource states feature widely, e.g., in continuous-variable applications or in error correction with magic states, where the computation relies on a specific state being prepared as an input. Oftentimes, the algorithm to prepare the initial state is unknown. However, given a device with a certain type of gates, we can let the computer “learn” a gate sequence that prepares the desired state on the specific hardware. Likewise, entire quantum experimental setups (e.g., to generate highly entangled quantum systems) have been designed by machine learning approaches.
“The ideas of machine learning can transform the way we do quantum algorithmic design: Instead of composing algorithms, we can let the device learn them.”
Quantum hardware development
Intimate knowledge of machine learning can also help in building a quantum computer, a process that generates large amounts of labeled and unlabeled data. For example, data is produced when reading out the final quantum state of the device, when assessing the performance of gates, or when estimating measurement outcomes. Quantum machine learning has a very active subfield in which classical machine learning is used to make sense of data produced by quantum experiments in the lab. Machine learning systems could easily become a standard component of quantum hardware.
“Machine learning may one day be a standard technique to read out the results of a computation with a quantum device.”
Quantum machine learning techniques are also closely tied to a variety of application areas. Quantum chemistry, for instance, is likewise interested in minimizing high-dimensional and difficult cost functions, e.g., to find the lowest-energy configurations of molecules for drug discovery or materials science. Quantum computers can be used to tackle these problems with methods like the variational quantum eigensolver, a close relative of the variational circuits discussed above (see for example this recent result, which scaled to a water molecule on an extremely noisy quantum computer).
Since machine learning and quantum chemistry are both heavily based on optimization, it is not surprising that they can both leverage similar quantum algorithms. A variational quantum eigensolver is, in essence, the same algorithm as a variational classifier, which was introduced as an innovative way to use quantum computers for machine learning. Understanding gained from the machine learning side will translate to new insights on the chemistry side. Good quantum machine learning algorithms will therefore have immediate consequences for other quantum applications based on data and optimization.
In summary, quantum machine learning can enable machine learning, innovate machine learning, and redefine the way we think about quantum computing itself: three reasons why quantum machine learning may have a big future when it comes to small-scale quantum devices.
This opinion piece is signed by: Maria Schuld, Nathan Killoran, Thomas Bromley, Christian Weedbrook, Peter Wittek
Billions of dollars are devoted to designing, producing, and refining materials and molecules for applications in many sectors of the world economy. With the vast amount of data now available, can we automate the discovery of new materials with tailored physical and chemical properties?
In principle, yes!
One promising solution is to use generative adversarial networks. As shown in the figure, the idea is simple: train a neural network — called the generator — to output candidate materials with certain desired properties. To give the network some flexibility, we also provide some unstructured, randomly chosen inputs. Its task is to convert these unstructured random inputs into a selection of new materials with the target properties. Of course, the generator does not physically create the materials, but rather simulates them. In addition to proposing materials, the generator can also provide the physical protocol for fabricating them.
A second neural network — called the discriminator — is tasked with training the generator. The discriminator judges whether a given example comes from real experimental data or from the generator. If the data comes from a real source, the discriminator outputs ‘real’ and if the data comes from the generator, it outputs ‘fake’.
Illustration of the workflow used to train a quantum generative adversarial network with the goal, in this example, of creating new materials for solar cells.
The generator neural network is trained in an adversarial game with the discriminator. The generator must fool the discriminator into wrongly classifying its fake outputs as ‘real’. At the beginning of the training, it is easy to imagine that the generator only outputs random information and the discriminator randomly classifies its input as ‘real’ or ‘fake’. However, the discriminator can improve its performance since the real data — which can be highly structured or even human-readable text — is highly distinguishable from the strings of random bits at the output of the untrained generator. As the discriminator starts to distinguish between random noise and structured data, the generator can follow the gradient of the discriminator, learning to generate data which is more likely to fool it. At this point, the discriminator has to discover other features to accomplish its task and the generator continues the adversarial game by improving its ability to generate those features. In theory, at the end of the game, the generator will produce materials which closely resemble the data used for training. By changing the input parameters, the generator can be used to create completely novel materials, as well as detailing their fabrication process!
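The adversarial game described above fits in a few dozen lines for a deliberately tiny case: a two-parameter generator learning to imitate samples from a 1-D Gaussian, against a logistic-regression discriminator. All of the data, model shapes, and hyperparameters here are illustrative toys, far removed from a real materials model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real "data": samples from a 1-D Gaussian the generator must imitate
def real(n):
    return rng.normal(3.0, 0.5, size=n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, with latent noise z
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    x_fake, x_real = a * z + b, real(batch)

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: fool the discriminator, pushing d(fake) -> 1
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a, b = a - lr * grad_a, b - lr * grad_b

# Over training, the generator's outputs drift toward the data distribution
```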
This whole idea could work in principle, but in practice we would run into a very fundamental problem. The properties of the materials and molecules in our universe are determined by their microscopic constituents, which obey the laws of quantum mechanics. Ultimately, the stability of matter itself is a direct consequence of the fact that the physical properties of fundamental particles like electrons and nuclei are not expressed with probabilities, but with complex probability amplitudes. The interference of electronic amplitudes around nuclei is the mechanism which ensures that electrons do not collapse onto nuclei, countering the electrostatic attraction. The properties of the interference pattern determine, to various degrees, the large-scale properties of a given material: whether it is an insulator, a conductor, a semiconductor, a superconductor, or any kind of magnet. In fact, all properties of all matter, from the smallest molecules all the way up to neutron stars, are determined by interference! Nevertheless, accurately computing the interference patterns of complex probability amplitudes is difficult. In fact, it is so difficult that we must double the size of a computer each time we want to add a new quantum particle to a simulation. Therefore, it would be extremely difficult for a generative adversarial network coded on a classical computer to synthesize fundamentally quantum phenomena and generate revolutionary new molecules.
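That doubling is easy to quantify for two-level particles (a back-of-the-envelope sketch; the continuous-variable modes of photonics are even larger):

```python
# Simulating n two-level quantum particles on a classical machine means
# storing 2**n complex amplitudes: every added particle doubles the memory.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

for n in (10, 20, 30, 40):
    amplitudes = 2 ** n
    memory_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n} particles: {amplitudes:,} amplitudes, {memory_gb:,.2f} GB")
```

Already at a few dozen particles, the state vector outgrows any existing supercomputer.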
This difficulty can be turned into a powerful lever if we instead use the quantum behavior of the building blocks of matter to execute quantum calculations. This is precisely why we are building a quantum computer at Xanadu. Universal quantum computers can simulate all physical phenomena in a way which is exponentially more efficient than current-day classical computers. By using universal quantum circuits, it is possible to extend classical adversarial networks to the quantum domain and to unlock the full power of quantum computers. In two new papers [4,5], written with my colleagues Nathan Killoran, Seth Lloyd, and Christian Weedbrook, we define these ideas for the first time: quantum generative adversarial networks and quantum adversarial learning!
This kind of algorithm, being fundamentally quantum in nature, has the potential to be exponentially more efficient than its classical counterpart at representing and generating highly correlated data, such as in our materials-design example. We can also imagine a future where more complicated optimization tasks include minimizing production and logistics costs, as well as macroeconomic quantities such as market supply and the prices of various financial derivatives. We expect that quantum machine learning will also be leveraged to improve our ability to understand the training of these networks.
Software and simulations: Underpinning the work of our algorithms team is the ability to easily simulate our quantum photonics system. This is a hugely important component, allowing us to quickly flesh out ideas, and discover and probe interesting and unexpected behaviour.
As part of these efforts, we have developed a full-stack software solution for simulating quantum photonics and continuous-variable quantum computing. And that’s not all — we have also integrated support for TensorFlow, creating a framework that combines the latest advances in deep learning and machine learning with quantum computation.
The best part? Our framework is now open-sourced and available for anyone to use and play with.
Introducing Strawberry Fields and Blackbird.
An open-source full-stack quantum software platform for photonic quantum computing.
Implemented in Python for ease of use, Strawberry Fields specifically targets continuous-variable quantum computation. Quantum circuits are written in the intuitive Blackbird quantum programming language.
Powers the Strawberry Fields Interactive web app, which allows anyone to run a quantum computing simulation via drag and drop. Quantum computing has never been simpler.
Includes a suite of quantum simulators implemented using NumPy and TensorFlow — these convert and optimize Blackbird code for classical simulation.
Future releases will target experimental backends, including photonic quantum computing chips.
These last two features are the most thrilling to us here at Xanadu. Strawberry Fields is the first quantum computing simulator to include gradient-based optimization of quantum circuits, designed to be intuitive even without a background in machine learning. And soon, you’ll be able to run quantum experiments directly on our quantum photonics chip.
Simulation and physical experiments: all from the same piece of code.
What can I use it for?
Whatever you like! The sky is the limit.
Pushing the theoretical limits of quantum computation: Strawberry Fields is ideal for studying existing algorithms, or for quickly prototyping new ideas and breakthroughs.
Designing and prototyping quantum photonics: Need to design a photonics experiment before committing to buying expensive components? Perhaps you’d like to optimize a photonics set-up, to make maximum use of the components you already have available.
Exploration and design of novel quantum circuits: On the other hand, do you know the output you need, but you’re not sure how exactly to get there? Exploit the built-in TensorFlow support and use deep learning to design and optimize circuits.
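To give a flavour of what gradient-based circuit design looks like, here is a minimal stand-in written in plain Python (our own toy illustration, not Strawberry Fields’ actual TensorFlow backend): a one-parameter “circuit” preparing the state (cos θ, sin θ) is tuned by gradient descent until it matches a target state.

```python
import math

# Toy "circuit": a single rotation angle theta prepares the real state
# (cos theta, sin theta). We want the output state (0, 1), so the loss is
# the infidelity 1 - |<target|state(theta)>|^2 = 1 - sin(theta)^2.
theta = 0.1
lr = 0.2

for _ in range(200):
    # Analytic gradient of the infidelity: d/dtheta [1 - sin^2] = -2 sin cos.
    grad = -2.0 * math.sin(theta) * math.cos(theta)
    theta -= lr * grad          # plain gradient descent on the parameter

fidelity = math.sin(theta) ** 2  # approaches 1 as theta -> pi/2
```

In Strawberry Fields itself, the same loop is expressed with circuit parameters as TensorFlow variables, so the gradients come from automatic differentiation rather than a hand-derived formula.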
If you find yourself in a situation where you need additional features for your research, get in touch with us — Strawberry Fields is still under heavy development, and we are always open to hearing how we can make it a more integral part of your research workflow.
Okay, you’ve convinced me. How do I start?
To see Strawberry Fields in action immediately, try out our Strawberry Fields Interactive web application. Prepare your initial states, drag and drop gates, and watch your simulation run in real time right in your web browser.
To take full advantage of Strawberry Fields, however, you’ll want to use the Python library. The best place to start is our documentation — we have put together an extensive selection of pages discussing continuous-variable quantum theory, quantum algorithms, and of course installation instructions and details of the Strawberry Fields API. This is supplemented by an array of tutorials; starting from the introductory (a basic guide to quantum teleportation) to the more advanced (machine learning and gradient-based optimization of quantum circuits).
You can also check out the source code directly on GitHub — the issue tracker is a great place to leave any feedback or bug reports. Alternatively, if you’d like to contribute directly, simply fork the repository and make a detailed pull request.
For more technical details regarding the Strawberry Fields architecture, be sure to read our whitepaper.
It is difficult to overstate just how excited we are. Strawberry Fields is the culmination of months of hard work, and gives us the chance to share our progress with the quantum computing community.
But this is just the start — we have a ton of exciting projects in the pipeline. Watch this space.
The latest Xanadu research paper proposes a novel perspective on quantum machine learning that sounds crazy at first sight. The core idea is to use the Hilbert space of a quantum system to analyze data. The Hilbert space is the place where the states that describe a quantum system live, and it is a very large place indeed. For a 50-qubit quantum computer, we are talking about a 2^50 ≈ 1,125,899,906,842,624-dimensional space, and for a single mode of a continuous-variable quantum computer, the Hilbert space has an infinite number of dimensions. So how can we analyze data in such a Hilbert space if we have no chance to ever visit it, let alone to perform computations in it?
In fact, machine learning practitioners have been doing this kind of thing for decades when using the beautiful mathematical theory of kernel methods. Kernels are functions that compute a distance measure between two data points, for example between two images or text documents. We can build machine learning models from kernels, the most famous being support vector machines and Gaussian processes. It turns out that every kernel is related to a large — and sometimes infinite-dimensional — feature space. Computing the distance measure of two data points is equivalent to embedding these data points into the feature space and computing the inner product of the embedded vectors. In a sense, this is the opposite of neural networks, where we compress the data to extract a few features. Here, we effectively ‘blow up’ the data to make it potentially easier to analyze.
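For concreteness, here is the classic kernel identity in code: the degree-2 polynomial kernel evaluated directly on two 2D points agrees, number for number, with an explicit inner product in a 6-dimensional feature space. (This is a standard textbook example, not taken from the paper.)

```python
import math

def phi(x):
    """Explicit feature map for the 2D polynomial kernel (x.y + 1)^2."""
    x1, x2 = x
    s = math.sqrt(2.0)
    return [x1 * x1, x2 * x2, s * x1 * x2, s * x1, s * x2, 1.0]

def poly_kernel(x, y):
    """The kernel evaluated directly, with no trip to feature space."""
    return (x[0] * y[0] + x[1] * y[1] + 1.0) ** 2

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (3.0, -1.0)
k_direct = poly_kernel(x, y)       # (1*3 + 2*(-1) + 1)^2 = 4.0
k_feature = dot(phi(x), phi(y))    # inner product of the embedded vectors
```

A quantum device plays the role of `poly_kernel` here: it estimates the inner product (the state overlap) without ever writing down the embedded vectors explicitly.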
Mapping inputs into a large space and computing inner products is something that quantum computers can do rather easily. And any device that can encode a data point into a quantum state (which is really almost any quantum device), and which can estimate the overlap of two quantum states, can compute a kernel. Kernel methods are therefore a strikingly elegant approach to quantum machine learning. What is more, if the data encoding strategy is complex enough, we might even find cases where no classical computer could ever compute that same kernel. If we can show that our “quantum kernel” is useful for learning, we have a recipe for a quantum-assisted machine learning algorithm that is impossible to do classically: use the quantum device as a special-purpose estimator for kernel functions, and feed these estimates into a classical computer where a kernel method is trained and used for predictions. Voila!
But the story does not end there. Quantum computing can actually be used to analyze data directly in feature space, without relying on the convenient detour via kernels. This idea has been successfully used for quantum-inspired machine learning with tensor networks (check out this great paper and its successors), and now we want real quantum systems to do the job. For this, we use a variational circuit to define a linear model in Hilbert space.
To explain this in more detail, consider as an example the binary classification problem of the figure above, where we have to draw a line — a decision boundary — between two classes of data. We can encode a data point x into a quantum state |ϕ(x)>, which effectively maps it to a vector in Hilbert space. In a continuous-variable system, this vector is an infinite-dimensional Fock state. A unitary transformation W applied to the quantum state is nothing other than a linear model with respect to that vector. With a bit of post-processing, W defines a linear decision boundary, or hyperplane, to separate the data in Hilbert space. From support vector machines, we know that a linear model is very well suited to analyzing data in a feature space.
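To see a linear model in feature space at work, here is a small classical sketch of our own: XOR-labelled data that no straight line can separate in the plane becomes linearly separable after a simple feature map, and an ordinary perceptron trained in that space classifies it perfectly.

```python
# XOR-labelled points: label is -1 when the coordinates agree, +1 otherwise.
# No line through the original 2D plane separates the two classes.
data = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]

def phi(x):
    """Feature map into 4D; the x1*x2 coordinate makes XOR separable."""
    x1, x2 = x
    return [1.0, float(x1), float(x2), float(x1 * x2)]

def predict(w, x):
    return 1 if sum(wi * fi for wi, fi in zip(w, phi(x))) > 0 else -1

# Train a plain perceptron, but in the feature space rather than the plane.
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(20):
    for x, label in data:
        if predict(w, x) != label:
            w = [wi + label * fi for wi, fi in zip(w, phi(x))]

correct = sum(predict(w, x) == label for x, label in data)
```

The variational-circuit picture is the quantum analogue: the encoding |ϕ(x)> plays the role of `phi`, and the unitary W plays the role of the trained weight vector `w`.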
We can make the circuit depend on a set of parameters, W=W(θ), and train it to find the best linear decision boundary. These variational circuits have recently become a booming area of research in quantum machine learning [3,4,5]. With the theory of kernel methods, the approach of training circuits is enriched by a theoretical interpretation that can be used to guide our attempts to build powerful classifiers.
To summarize, using the Hilbert space of a quantum system for data analysis gives us a theoretical framework that can guide the development of quantum machine learning algorithms. It charts a potential road to demonstrating so-called “quantum supremacy” for real-life applications. Whether we can find cases in which this approach leads to useful classifiers is an exciting open question.
[1] B. Schoelkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, MA (2002).
[2] E. M. Stoudenmire and D. Schwab, Advances in Neural Information Processing Systems, pp. 4799–4807 (2016).
[3] G. Verdon, M. Broughton, and J. Biamonte, arXiv:1712.05304 (2017).
[4] E. Farhi and H. Neven, arXiv:1802.06002 (2018).
[5] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, arXiv:1803.00745 (2018).
We work to manufacture the world’s first all on-chip photonic quantum processor, using cutting edge techniques to harness powerful properties of light. The purpose of this blog is to keep you updated with our progress. From exciting new findings to testing challenges, and everything in between, we will keep you in tune with the latest in the world of quantum tech.
Quantum machine learning is one of the primary focuses at Xanadu. Our machine learning team is strengthening the connections between artificial intelligence and quantum technology. In this blog post we discuss how a neural network can be made quantum, potentially giving huge increases in operating speed and network capacity. This post requires no prior scientific or mathematical background – even if you’ve never heard of a neural network, read on! For more details, a paper explaining these findings is available here.
You have probably benefited from machine learning today. And yesterday. As well as the day before. Machine learning is becoming increasingly embedded in our daily routine. If you have checked a social media account, performed an online search, or even commuted to work, a remote server may have shaped your experience using a wide range of learning algorithms. The objective of machine learning is to give computers the power to make predictions and generalizations from data without explicitly telling them how to do so. It is an extremely exciting and fast-evolving area; take a look at a beginner’s introduction.
A very successful approach in machine learning is to design an artificial neural network, which is inspired by the structure of neurons in the brain. Imagine a collection of points, each of which can be in one of two states: “on” or “off”. These points are interconnected with wires of variable strength, as shown in the diagram below. The network is operated by allowing each neuron to decide its state based upon the states of the neurons connected to it, also bearing in mind the strength of the connections. One of the advantages of neural networks is the ability to tailor the structure to the problem; see here to appreciate the neural network zoo! Neural networks have been used for a variety of applications, including voice recognition and cancer detection.
Quantum neural networks
So where can quantum technology help? At Xanadu, we have been looking at how to embed a type of neural network into a quantum system. Our first step is to use a property called quantum coherence, where a system can concurrently exist in a combination of states – in what we call a coherent superposition. The trick is then to associate each neuron with a state of the system: if the neuron is “on” then its corresponding state appears with a positive sign in the superposition, while if the neuron is “off” then its state appears with a negative sign. We have focused on systems of multiple quantum bits (qubits), each of which can either be “up” or “down”. By looking at all of the combinations of “up” and “down” possible in our collection of qubits, you can see that an exponential number of neuron configurations can be stored within a small number of qubits. For example, the diagram below shows that we can store any configuration of 4 neurons in only 2 qubits!
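The sign-based encoding can be sketched directly (the neuron configuration below is just an illustrative example of ours): four neuron states become the signs of the four amplitudes of a normalized two-qubit state vector, so n qubits store 2^n neurons.

```python
import math

# A hypothetical configuration of four neurons, each "on" or "off".
neurons = ["on", "off", "on", "on"]

# Two qubits have 2^2 = 4 basis states, one per neuron; the sign of each
# amplitude records whether the corresponding neuron is on or off.
n_qubits = int(math.log2(len(neurons)))
amp = 1.0 / math.sqrt(len(neurons))   # equal magnitudes, so the norm is 1
state = [amp if s == "on" else -amp for s in neurons]
```

Doubling the number of neurons only adds one more qubit, which is the exponential storage advantage described above.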
By using this way of embedding neurons within qubits, as well as accessing an increased storage capacity, we unlock access to a huge number of quantum algorithms that can help us to speed up processing the network. The first step is to choose the structure of the neural network, so that we know which quantum algorithm is best suited to give us a performance advantage. This post focuses on the Hopfield network, a structure in which all of the neurons are connected to each other with variable weights (forming a complete graph). The Hopfield network can be used as a content-addressable memory system: configurations of the neurons are associated with patterns (for example, images), which are stored by altering the weights of the connections between neurons. This is known as Hebbian learning. New patterns can then be loaded into the Hopfield network and processed with the objective of recovering the most similar pattern stored in memory.
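Hebbian learning and content-addressable recall are easy to demonstrate classically. The sketch below (our own toy example, using the conventional classical update rule rather than the quantum routine) stores two patterns in the weight matrix, corrupts one neuron, and recovers the stored pattern in a single update round.

```python
# Two 8-neuron patterns to memorize, with states +1 ("on") and -1 ("off").
patterns = [
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
]
n = len(patterns[0])

# Hebbian learning: W_ij = (1/n) * sum over patterns of p_i * p_j, W_ii = 0.
W = [[0.0] * n for _ in range(n)]
for p in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += p[i] * p[j] / n

# Corrupt the first stored pattern by flipping one neuron ...
probe = list(patterns[0])
probe[0] = -probe[0]

# ... then run one synchronous update: s_i <- sign(sum_j W_ij s_j).
recovered = [
    1 if sum(W[i][j] * probe[j] for j in range(n)) >= 0 else -1
    for i in range(n)
]
```

The corrupted probe falls back into the nearest stored memory, which is exactly the content-addressable behaviour the quantum routine accelerates.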
The conventional way of operating the Hopfield network is to keep picking neurons at random and updating them by considering the connected neurons, along with their weights. One of our insights is to realize that the Hopfield network can instead be run in a single step by inverting a matrix containing information on the weights between all neurons. Then, using the embedding into qubits discussed above, we can turn to the famous quantum HHL algorithm to process the Hopfield network. The HHL algorithm can invert a matrix exponentially faster than the best algorithms running on standard computers. However, to exploit the HHL algorithm we need to be able to do something called Hamiltonian simulation of our matrix.
The Hamiltonian of a quantum system governs how it naturally evolves in time. Hamiltonian simulation is therefore the art of making a quantum system evolve in a controlled way so that its evolution is as close as possible to the given Hamiltonian. One of the novel techniques that we have developed is a method of Hamiltonian simulation for the Hopfield network matrix. This is achieved by repeatedly “partial swapping” in batches of the memory patterns to be stored. By “partial swapping,” we mean that our qubits are partly swapped with another bank of qubits holding sequences of the memory patterns. This construct can be thought of as the quantum analogue of Hebbian learning (qHeb), and we will be releasing a paper with more details shortly. A diagram summarizing our quantum approach for the Hopfield network is given below. We call our quantum routine qHop, which uses the quantum subroutine qHeb.
So, how can this help?
Encoding a neural network within qubits gives an exponential advantage in storage capacity, while the algorithms qHop and qHeb team up to give an exponential increase in processing speed. This means that we expect to run larger neural networks faster on a quantum processor than we could using a standard computer. The Hopfield network itself has applications as a pattern recognition system, as well as for solving the travelling salesman problem; read this book for a very clear explanation.
We have highlighted in particular the application of the Hopfield network within genetics as a recognizer of infectious diseases. Imagine that an outbreak of flu has occurred and scientists have partially sequenced the genetic code of the virus. Their goal is to match the genetic sequence with one of the known strains of flu, such as H1N1 or H5N1. By loading the partial sequence into the Hopfield network, which has already stored all the known strains within the neuron connection weightings, the scientists can work out which strain of flu has caused the outbreak. In the image below, we show how the genetic data in terms of the RNA base pairs A, C, G, and U can be stored in neurons of the network. The plot shows a comparison between simulated results of operating the Hopfield network using the conventional approach and our new matrix inversion based approach. Running this algorithm on a quantum processor will also give improvements in storage capacity and operating speed.
We are very excited to uncover improvements to the Hopfield network through quantum mechanics. Yet, there is still more work to be done! The question of how to quickly read in and out data from our quantum device still needs to be addressed.
At the same time, the experimental team here at Xanadu has been working on innovative chip designs and implementations of photonic quantum processors. One of our main objectives is to combine new insights in quantum machine learning with real-world photonic quantum processors. We hope to use the power of laser light within our chip, which can go far beyond the power of even qubits, to make a disruptive impact on machine learning.