BLOG

May 8, 2018

Xanadu Raises $9M Seed Round to Build Game-Changing Photonic Quantum Computers

By Xanadu

World-Renowned MIT Professor and Quantum Computing Pioneer, Seth Lloyd, joins as Chief Scientific Advisor

TORONTO, May 8, 2018 /CNW/ – Xanadu, a leader in photonic quantum computing, announced a seed round of $9M led by OMERS Ventures with participation from Golden Ventures and Real Ventures. The funds will help capitalize on the rapid development of the company’s unique approach to quantum computing, one that is based on building quantum photonic chips, which will enable the creation of the world’s most powerful computers.

Recent scientific breakthroughs have shown that photons — particles of light — can be harnessed to perform extremely fast, incredibly complex computations, making it possible to solve some of the world’s most pressing computational problems.

With a team of world-class experts in quantum computing and hardware, Xanadu is developing a photonic quantum architecture that can help solve previously intractable problems in machine learning, chemistry, finance, sensing, and drug discovery. The team’s mission is to achieve practical quantum supremacy by building quantum processors based on silicon photonics, where fabrication is achieved using existing foundries, leading to significant cost and production savings.

Seth Lloyd, a Professor at MIT and one of the pioneers of quantum computing, has joined as Xanadu’s Chief Scientific Advisor. “Xanadu’s approach is highly flexible and eminently suitable for performing tasks such as quantum simulation and quantum machine learning. This will allow the scaling up of quantum information processing to tens of thousands of quantum operations within the next few years,” said Lloyd.

Xanadu’s customer-centric approach and focus on productization position it well for when scalability is achieved. “We’ve been customer-driven and solutions-driven from day one,” said Christian Weedbrook, founder and CEO of Xanadu. “Our hardware and software are developed with direct input from forward-thinking businesses that understand the vast potential and benefits of embracing an inevitable quantum world early on.”

The seed round represents a strong acknowledgement of the fast pace at which quantum computing is developing. “Xanadu is looking to solve tough problems in a truly innovative way, which will have a significant impact across industries globally. We’re excited about Xanadu’s potential and confident in the team’s plans to build practical quantum solutions,” said Sid Paquette, managing partner at OMERS Ventures.

Discover how Xanadu is solving the world’s toughest computational problems at www.xanadu.ai.

About Xanadu
Xanadu is a quantum technologies company powered by light. Xanadu designs and integrates quantum silicon photonic chips into existing hardware to create truly full-stack quantum computing. Their methods will solve today’s toughest business problems significantly faster than ever imagined. www.xanadu.ai

About OMERS Ventures
OMERS Ventures is the venture capital investment arm of OMERS, one of Canada’s largest pension funds with over $95 billion in net assets. OMERS Ventures is a multi-stage investor in growth-oriented, disruptive technology companies across North America. www.omersventures.com

About Golden Ventures
Golden Ventures is a seed-stage venture capital fund based in Toronto whose team has been in the trenches as both operators and investors of successful high-growth startups. Golden Ventures invests in extraordinary, determined entrepreneurs who are out to change the world. www.golden.ventures

About Real Ventures
Real Ventures invests throughout the life-cycle of early-stage companies and provides support for the founders it backs. Real invests in ambitious, game-changing entrepreneurs who are the driving force behind emerging tech ecosystems. www.realventures.com

SOURCE Xanadu

For further information: Contact Shivanu Thiyagarajah, Xanadu, press@xanadu.ai

May 7, 2018

Dreaming up new materials with quantum computers

By Pierre-Luc Dallaire-Demers

Billions of dollars are devoted to designing, producing, and refining materials and molecules for applications in many sectors of the world economy [1]. With the vast amount of data now available, can we automate the discovery of new materials with tailored physical and chemical properties?

In principle, yes!

One promising solution is to use generative adversarial networks [2]. As shown in the figure, the idea is simple: train a neural network — called the generator — to output candidate materials with certain desired properties. To give the network some flexibility, we also provide some unstructured, randomly chosen inputs. Its task is to convert these unstructured random inputs into a selection of new materials with the target properties. Of course, the generator does not physically create the materials, but rather simulates them. In addition to proposing materials, the generator can also provide the physical protocol for fabricating them.

A second neural network — called the discriminator — is tasked with training the generator. The discriminator judges whether a given example comes from real experimental data or from the generator. If the data comes from a real source, the discriminator outputs ‘real’ and if the data comes from the generator, it outputs ‘fake’.

 

Illustration of the workflow used to train a quantum generative adversarial network with the goal, in this example, of creating new materials for solar cells.

 

The generator neural network is trained in an adversarial game with the discriminator. The generator must fool the discriminator into wrongly classifying its fake outputs as ‘real’. At the beginning of the training, it is easy to imagine that the generator only outputs random information and the discriminator randomly classifies its input as ‘real’ or ‘fake’. However, the discriminator can improve its performance since the real data — which can be highly structured or even human-readable text — is highly distinguishable from the strings of random bits at the output of the untrained generator. As the discriminator starts to distinguish between random noise and structured data, the generator can follow the gradient of the discriminator, learning to generate data which is more likely to fool it. At this point, the discriminator has to discover other features to accomplish its task and the generator continues the adversarial game by improving its ability to generate those features. In theory, at the end of the game, the generator will produce materials which closely resemble the data used for training. By changing the input parameters, the generator can be used to create completely novel materials, as well as detailing their fabrication process!
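The adversarial loop described above can be sketched in a few lines of NumPy. The example below is a deliberately tiny classical GAN, not the quantum version from the papers: the “real data” is just samples from a one-dimensional Gaussian, the generator is a linear map, the discriminator is logistic regression, and the gradients are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gan(steps=3000, lr=0.05, batch=32):
    a, b = 1.0, 0.0   # generator g(z) = a*z + b, fed random inputs z
    w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)   # "real" data: N(4, 0.5)
        z = rng.normal(0.0, 1.0, batch)      # unstructured random input
        fake = a * z + b

        # Discriminator step: push D(real) -> 'real', D(fake) -> 'fake'.
        p_real = sigmoid(w * real + c)
        p_fake = sigmoid(w * fake + c)
        w += lr * np.mean((1 - p_real) * real - p_fake * fake)
        c += lr * np.mean((1 - p_real) - p_fake)

        # Generator step: follow the discriminator's gradient to fool it.
        p_fake = sigmoid(w * fake + c)
        grad_fake = (1 - p_fake) * w         # d log D(fake) / d fake
        a += lr * np.mean(grad_fake * z)
        b += lr * np.mean(grad_fake)
    return a, b

a, b = train_gan()
print(f"generated mean ~ {b:.2f}, target mean 4.0")
```

Since z is zero-mean, the parameter b is the mean of the generated samples; over training it drifts toward the data mean as the two players play out the adversarial game.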

This whole idea could work in principle, but in practice we would run into a very fundamental problem. The properties of the materials and molecules in our universe are determined by their microscopic constituents, which obey the laws of quantum mechanics. Ultimately, the stability of matter itself is a direct consequence of the fact that the physical properties of fundamental particles like electrons and nuclei are expressed not with probabilities, but with complex probability amplitudes. The interference of electronic amplitudes around nuclei is the mechanism which ensures that electrons do not collapse onto nuclei, countering the electrostatic attraction. The properties of the interference pattern determine, to various degrees, the large-scale properties of a given material: whether it is an insulator, a conductor, a semiconductor, a superconductor, or any kind of magnet. In fact, all properties of all matter, from the smallest molecules all the way up to neutron stars, are determined by interference! Nevertheless, accurately computing the interference patterns of complex probability amplitudes is difficult. In fact, it is so difficult that we must double the size of a classical computer each time we want to add a new quantum particle to a simulation. It would therefore be extremely difficult for a generative adversarial network coded on a classical computer to synthesize fundamentally quantum phenomena and generate revolutionary new molecules.
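To make the doubling concrete: an n-qubit register is described by 2^n complex amplitudes, so each added particle doubles the memory a classical simulator needs. A quick back-of-the-envelope sketch:

```python
# Each qubit doubles the number of complex amplitudes in the state vector,
# so classical memory requirements grow as 2**n.
def statevector_amplitudes(n_qubits):
    return 2 ** n_qubits

for n in (10, 20, 30, 50):
    amps = statevector_amplitudes(n)
    gb = amps * 16 / 1e9   # ~16 bytes per complex amplitude
    print(f"{n} qubits: {amps:,} amplitudes (~{gb:,.3f} GB)")
```

By 50 qubits the state vector alone would occupy roughly 18 petabytes, which is why classically simulating quantum matter hits a wall so quickly.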

This difficulty can be turned into a powerful lever if we instead use the quantum behavior of the building blocks of matter to execute quantum calculations [3]. This is precisely why we are building a quantum computer at Xanadu. Universal quantum computers can simulate all physical phenomena in a way which is exponentially more efficient than current-day classical computers. By using universal quantum circuits, it is possible to extend classical adversarial networks to the quantum domain and to unlock the full power of quantum computers. In two new papers [4,5], my colleagues Nathan Killoran, Seth Lloyd, Christian Weedbrook and I define these ideas for the first time: quantum generative adversarial networks and quantum adversarial learning!

This kind of algorithm, being fundamentally quantum in nature, has the potential to be exponentially more efficient than its classical counterpart at representing and generating highly correlated data such as for our example in materials design. We can also imagine a future where more complicated optimization tasks would include minimization of production and logistical costs, as well as macro-economical quantities such as market supplies and the price of various financial derivatives. We expect that quantum machine learning will also be leveraged to improve our ability to understand the training of these networks.

[1] Aspuru-Guzik, A., Lindh, R. and Reiher, M., 2018. The Matter Simulation (R)evolution. ACS central science, 4(2), pp.144–152.

[2] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y., 2014. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680.

[3] Feynman, R.P., 1982. Simulating physics with computers. International journal of theoretical physics, 21(6–7), pp.467–488.

[4] Dallaire-Demers, P.L. and Killoran, N., 2018. Quantum generative adversarial networks. arXiv preprint arXiv:1804.08641.

[5] Lloyd, S. and Weedbrook, C., 2018. Quantum generative adversarial learning. arXiv preprint arXiv:1804.09139.

April 11, 2018

Introducing Strawberry Fields

By Josh Izaac

Here at Xanadu, we have some of the brightest minds tackling the problem of practical quantum computation. As a full-stack quantum startup, we take a three-pronged approach to quantum computing:

  • Hardware: Our experimental physicists are working around the clock to develop an on-chip quantum photonic processor. Stay tuned — we’ll have some exciting news to share soon.
  • Applications and algorithms: Our algorithms team has already started publishing results across diverse fields such as quantum chemistry, graph theory, machine learning and more, while simultaneously making advances in photonic quantum computing. Check out our recent papers if you haven’t already.
  • Software and simulations: Underpinning the work of our algorithms team is the ability to easily simulate our quantum photonics system. This is a hugely important component, allowing us to quickly flesh out ideas, and discover and probe interesting and unexpected behaviour.

As part of these efforts, we have developed a full-stack software solution for simulating quantum photonics and continuous-variable quantum computing. And that’s not all — we have also integrated support for TensorFlow, creating a framework that combines the latest advances in deep learning and machine learning with quantum computation.

The best part? Our framework is now open-sourced and available for anyone to use and play with.

Introducing Strawberry Fields and Blackbird.

An open-source full-stack quantum software platform for photonic quantum computing.

Using Strawberry Fields to simulate quantum teleportation from scratch.
  • Implemented in Python for ease-of-use, Strawberry Fields is specifically targeted to continuous-variable quantum computation. Quantum circuits are written using the easy-to-use and intuitive Blackbird quantum programming language.
  • Powers the Strawberry Fields Interactive web app, which allows anyone to run a quantum computing simulation via drag and drop. Quantum computing has never been simpler.
  • Includes a suite of quantum simulators implemented using NumPy and TensorFlow — these convert and optimize Blackbird code for classical simulation.
  • Future releases will target experimental backends, including photonic quantum computing chips.

These last two features are the most thrilling to us here at Xanadu. Not only is Strawberry Fields the first quantum computing simulator to include gradient-based optimization of quantum circuits (designed to be intuitive even without a background in machine learning), but soon you’ll also be able to run quantum experiments directly on our quantum photonics chip.

Simulation and physical experiments: all from the same piece of code.

What can I use it for?

Whatever you like! The sky is the limit.

Pushing the theoretical limits of quantum computation
Strawberry Fields is ideal for studying existing algorithms, or quickly prototyping new ideas and breakthroughs.

Designing and prototyping quantum photonics
Need to design a photonics experiment before committing to buying expensive components? Perhaps you’d like to optimize a photonics set-up, to make maximum use of the components you already have available.

Exploration and design of novel quantum circuits
On the other hand, do you know the output you need, but you’re not sure how exactly to get there? Exploit the built-in TensorFlow support and use deep learning to design and optimize circuits.

If you find yourself in a situation where you need additional features for your research, get in touch with us — Strawberry Fields is still under heavy development, and we are always open to hearing how we can make it a more integral part of your research workflow.

Okay, you’ve convinced me. How do I start?

To see Strawberry Fields in action immediately, try out our Strawberry Fields Interactive web application. Prepare your initial states, drag and drop gates, and watch your simulation run in real time right in your web browser.

To take full advantage of Strawberry Fields, however, you’ll want to use the Python library. The best place to start is our documentation — we have put together an extensive selection of pages discussing continuous-variable quantum theory, quantum algorithms, and of course installation instructions and details of the Strawberry Fields API. This is supplemented by an array of tutorials; starting from the introductory (a basic guide to quantum teleportation) to the more advanced (machine learning and gradient-based optimization of quantum circuits).

You can also check out the source code directly on GitHub — the issue tracker is a great place to leave any feedback or bug reports. Alternatively, if you’d like to contribute directly, simply fork the repository and make a detailed pull request.

For more technical details regarding the Strawberry Fields architecture, be sure to read our whitepaper.

***

It is difficult to overstate just how excited we are. Strawberry Fields is the culmination of months of hard work, and gives us the chance to share our progress with the quantum computing community.

But this is just the start — we have a ton of exciting projects in the pipeline. Watch this space.

March 28, 2018

Using quantum machine learning to analyze data in infinite-dimensional spaces

By Maria Schuld

The latest Xanadu research paper proposes a novel perspective on quantum machine learning that sounds crazy at first sight. The core idea is to use the Hilbert space of a quantum system to analyze data. The Hilbert space is the place where the states that describe a quantum system live, and it is a very large place indeed. For a 50-qubit quantum computer, we are talking about a 2^50 = 1,125,899,906,842,624-dimensional space, and for a single mode of a continuous-variable quantum computer, the Hilbert space has an infinite number of dimensions. So how can we analyze data in such a Hilbert space if we have no chance to ever visit it, let alone perform computations in it?

Kernel methods implicitly embed data into a higher dimensional feature space, where we can hope that it gets easier to analyze.

In fact, machine learning practitioners have been doing this kind of thing for decades when using the beautiful mathematical theory of kernel methods [1]. Kernels are functions that compute a distance measure between two data points, for example between two images or text documents. We can build machine learning models from kernels, the most famous being support vector machines and Gaussian processes. It turns out that every kernel is related to a large — and sometimes infinite-dimensional — feature space. Computing the distance measure of two data points is equivalent to embedding these data points into the feature space and computing the inner product of the embedded vectors. In a sense, this is the opposite of neural networks, where we compress the data to extract a few features. Here, we effectively ‘blow up’ the data to make it potentially easier to analyze.
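The kernel-feature-space equivalence can be checked directly for a small case. In this illustrative sketch (the inputs are arbitrary), the degree-2 polynomial kernel k(x, y) = (x·y)² on 2-D data equals an inner product in a 3-D feature space with feature map φ(x) = (x₁², √2·x₁x₂, x₂²):

```python
import numpy as np

def poly2_kernel(x, y):
    # implicit route: a single dot product in the 2-D input space
    return np.dot(x, y) ** 2

def feature_map(x):
    # explicit route: embed into the 3-D feature space of the kernel
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
k = poly2_kernel(x, y)
f = np.dot(feature_map(x), feature_map(y))
print(k, f)   # both equal 16.0
```

The two routes always agree, but the implicit one never has to construct the feature vectors — which is exactly what makes infinite-dimensional feature spaces usable.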

Mapping inputs into a large space and computing inner products is something that quantum computers can do rather easily. Any device that can encode a data point into a quantum state (which is really almost any quantum device) and estimate the overlap of two quantum states can compute a kernel. Kernel methods are therefore a strikingly elegant approach to quantum machine learning. What is more, if the data encoding strategy is complex enough, we might even find cases where no classical computer could ever compute that same kernel. If we can show that our “quantum kernel” is useful for learning, we have a recipe for a quantum-assisted machine learning algorithm that is impossible to run classically: use the quantum device as a special-purpose estimator for kernel functions, and feed these estimates into a classical computer where a kernel method is trained and used for predictions. Voila!
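As a classically simulated sketch of this recipe (the encoding below is a made-up single-qubit example, not the scheme from the paper): map a real number x to the state cos(x)|0⟩ + sin(x)|1⟩ and take the squared overlap of two encoded states as the kernel value.

```python
import numpy as np

def encode(x):
    # toy "angle encoding" of a real number into one qubit
    return np.array([np.cos(x), np.sin(x)])

def quantum_kernel(x, y):
    # squared overlap |<phi(x)|phi(y)>|^2 -- the quantity a quantum
    # device would estimate; here it reduces to cos^2(x - y)
    return np.abs(np.dot(encode(x), encode(y))) ** 2

print(quantum_kernel(0.3, 1.1), np.cos(0.3 - 1.1) ** 2)  # identical values
```

A kernel matrix built from such overlap estimates could then be handed to any classical kernel method, such as a support vector machine.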

A quantum-assisted support vector machine finds useful decision boundaries for small datasets. The kernel of the support vector machine is the inner product of 2-mode squeezed states, where the phase of the squeezing depends on the input data.

But the story does not end there. Quantum computing can actually be used to analyze data directly in feature space, without relying on the convenient detour via kernels. This idea has been successfully used for quantum-inspired machine learning with tensor networks (check out this great paper [2] and its successors), and now we want real quantum systems to do the job. For this, we use a variational circuit to define a linear model in Hilbert space.

To explain this in more detail, consider as an example the binary classification problem of the figure above, where we have to draw a line — a decision boundary — between two classes of data. We can encode a data point x into a quantum state |ϕ(x)>, which effectively maps it to a vector in Hilbert space. In a continuous-variable system, this vector is an infinite-dimensional Fock state. A unitary transformation W applied to the quantum state is nothing else than a linear model with respect to that vector. With a bit of post-processing, this defines a linear decision boundary, or hyperplane, to separate the data in Hilbert space. From support vector machines, we know that a linear model is very well suited to analyzing data in a feature space.

We can make the circuit depend on a set of parameters, W=W(θ) and train it to find the best linear decision boundary. These variational circuits have recently become a booming area of research in quantum machine learning [3,4,5]. With the theory of kernel methods, the approach of training circuits is enriched by a theoretical interpretation that can be used to guide our attempts of building powerful classifiers.

A quantum circuit (top) and its graphical representation as a neural network (bottom). Encoding a data point into optical modes maps it to an infinite-dimensional vector which can be interpreted as the hidden layer of a neural network. A variational quantum circuit together with measurements can then be used to extract two outputs from this layer, which are further processed to a binary prediction.

To summarize, using the Hilbert space of a quantum system for data analysis gives us a theoretical framework that can guide the development of quantum machine learning algorithms. It defines a potential road to show so-called “quantum supremacy” for real-life applications. Whether we can find cases in which this approach leads to useful classifiers is an exciting open question.

[1] B. Schoelkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, MA (2002).

[2] M. Stoudenmire and D. Schwab, Advances in Neural Information Processing Systems, pp. 4799–4807 (2016).

[3] G. Verdon, M. Broughton, and J. Biamonte, arXiv:1712.05304 (2017).

[4] E. Farhi and H. Neven, arXiv:1802.06002 (2018).

[5] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, arXiv:1803.00745 (2018).

February 8, 2018

Making a Neural Network, Quantum

By Tom Bromley

Hello world, we are in Xanadu.

We work to manufacture the world’s first all-on-chip photonic quantum processor, using cutting-edge techniques to harness powerful properties of light. The purpose of this blog is to keep you updated on our progress. From exciting new findings to testing challenges, and everything in between, we will keep you in tune with the latest in the world of quantum tech.

Quantum machine learning is one of the primary focuses at Xanadu. Our machine learning team is strengthening the connections between artificial intelligence and quantum technology. In this blog post we discuss how a neural network can be made quantum, potentially giving huge increases in operating speed and network capacity. This post will require no prior scientific or mathematical background, even if you’ve never heard of a neural network – read on! For more details, a paper explaining these findings is available here.

Neural networks

You have probably benefited from machine learning today. And yesterday. As well as the day before. Machine learning is becoming increasingly embedded in our daily routine. If you have checked a social media account, performed an online search, or even commuted to work, a remote server may have shaped your experience using a wide range of learning algorithms. The objective of machine learning is to give computers the power to make predictions and generalizations from data without explicitly telling them how to do so. It is an extremely exciting and fast-evolving area; take a look at a beginner’s introduction.

A very successful approach in machine learning is to design an artificial neural network, which is inspired by the structure of neurons in the brain. Imagine a collection of points that can each be in one of two states: “on” or “off”. These points are interconnected with wires of variable strength, as shown in the diagram below. The network is operated by allowing each neuron to decide its state based upon the states of the neurons connected to it, also bearing in mind the strength of the connections. One of the advantages of neural networks is the ability to choose the structure based upon the problem; see here to appreciate the neural network zoo! Neural networks have been used for a variety of applications, including voice recognition and cancer detection.

Quantum neural networks

So where can quantum technology help? At Xanadu, we have been looking at how to embed a type of neural network into a quantum system. Our first step is to use a property called quantum coherence, where a system can concurrently exist in a combination of states – in what we call a coherent superposition. The trick is then to associate each neuron with a state of the system: if the neuron is “on” then its corresponding state appears with a positive sign in the superposition, while if the neuron is “off” then there is a negative sign in the superposition. We have focused on systems of multiple quantum bits (qubits), each of which can either be “up” or “down”. By looking at all of the combinations of “up” and “down” possible in our collection of qubits, you can see that an exponential number of neuron configurations can be stored within a small number of qubits. For example, the diagram below shows that we can store any configuration of 4 neurons in only 2 qubits!
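A small sketch of this encoding (illustrative only): the ±1 states of 2ⁿ neurons become the signs of the 2ⁿ amplitudes of an n-qubit state vector.

```python
import numpy as np

def neurons_to_state(config):
    # config: +1 ("on") / -1 ("off") for each of 2**n neurons; returns the
    # normalized n-qubit state whose amplitude signs store the configuration
    amps = np.array(config, dtype=float)
    return amps / np.linalg.norm(amps)

config = [+1, -1, -1, +1]             # a configuration of 4 neurons...
state = neurons_to_state(config)
n_qubits = int(np.log2(len(config)))  # ...stored in just 2 qubits
print(n_qubits, state)
```

Doubling the number of qubits squares the number of neurons that can be stored, which is the exponential storage advantage discussed above.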

By using this way of embedding neurons within qubits, as well as accessing an increased storage capacity, we unlock access to a huge number of quantum algorithms that can help us to speed up processing the network. The first question here is to choose the structure of the neural network, so that we can know which quantum algorithm is best suited to give us a performance advantage. This post focuses on the Hopfield network, which is a structure where all of the neurons are connected to each other with variable weights (forming a complete graph). The Hopfield network can be used as a content-addressable memory system: configurations of the neurons are associated to patterns (for example, images), which are stored by altering the weights of the connections between neurons. This is known as Hebbian learning. A new pattern can then be loaded into the Hopfield network and processed with the objective of recovering the most similar pattern stored in memory.
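The classical workflow just described is compact enough to sketch in full: store patterns with the Hebbian rule, then update a corrupted probe until it settles on the nearest stored memory. The two patterns and the probe below are made up for illustration.

```python
import numpy as np

def hebbian_weights(patterns):
    # Hebbian learning: W = (1/N) * sum of pattern outer products,
    # with self-connections removed.
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    # Repeatedly update all neurons from their weighted inputs
    # (synchronous updates, for simplicity).
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

patterns = [[1, 1, -1, -1, 1, -1],
            [-1, 1, 1, -1, -1, 1]]
W = hebbian_weights(patterns)
probe = [1, 1, -1, -1, -1, -1]   # first pattern with one neuron flipped
print(recall(W, probe))          # recovers the first stored pattern
```

The conventional implementation updates neurons one at a time at random; the synchronous variant above is a simplification that behaves the same way on this small example.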

The conventional way of operating the Hopfield network is to keep picking neurons at random and updating them by considering the connected neurons, along with their weights. One of our insights is to realize that the Hopfield network can instead be run in a single step by inverting a matrix containing information on the weights between all neurons. Then, using the embedding into qubits discussed above, we can turn to the famous quantum HHL algorithm to process the Hopfield network. The HHL algorithm can invert a matrix exponentially faster than the best algorithms running on standard computers. However, to exploit the HHL algorithm we need to be able to do something called Hamiltonian simulation of our matrix.

The Hamiltonian of a quantum system governs how it naturally evolves in time. Hamiltonian simulation is therefore the art of making a quantum system evolve in a controlled way so that its evolution is as close as possible to the given Hamiltonian. One of the novel techniques that we have developed is a method of Hamiltonian simulation for the Hopfield network matrix. This is achieved by repeatedly “partial swapping” in batches of the memory patterns to be stored. By “partial swapping,” we mean that our qubits are partly swapped with another bank of qubits holding sequences of the memory patterns. This construct can be thought of as the quantum analogue of Hebbian learning (qHeb), and we will be releasing a paper with more details shortly. A diagram summarizing our quantum approach for the Hopfield network is given below. We call our quantum routine qHop, which uses the quantum subroutine qHeb.

So, how can this help?

Encoding a neural network within qubits gives an exponential advantage in storage capacity, while the algorithms qHop and qHeb team up to give an exponential increase in processing speed. This means that we expect to run larger neural networks faster on a quantum processor than we could using a standard computer. The Hopfield network itself has an application as a pattern recognition system, as well as for solving the travelling salesman problem; read this book for a very clear explanation.

We have highlighted in particular the application of the Hopfield network within genetics as a recognizer of infectious diseases. Imagine that an outbreak of flu has occurred and scientists have partially sequenced the genetic code of the virus. Their goal is to match the genetic sequence with one of the known strains of flu, such as H1N1 or H5N1. By loading the partial sequence into the Hopfield network, which has already stored all the known strains within the neuron connection weightings, the scientists can work out which strain of flu has caused the outbreak. In the image below, we show how the genetic data in terms of the RNA base pairs A, C, G, and U can be stored in neurons of the network. The plot shows a comparison between simulated results of operating the Hopfield network using the conventional approach and our new matrix inversion based approach. Running this algorithm on a quantum processor will also give improvements in storage capacity and operating speed.

What’s next?

We are very excited to uncover improvements to the Hopfield network through quantum mechanics. Yet, there is still more work to be done! The question of how to quickly read in and out data from our quantum device still needs to be addressed.

At the same time, the experimental team here at Xanadu has been working on innovative chip designs and implementations of photonic quantum processors. One of our main objectives is to combine new insights in quantum machine learning with real-world photonic quantum processors. We hope to use the power of laser light within our chip, which can go far beyond the power of even qubits, to make a disruptive impact on machine learning.

Stay posted for more breakthroughs!

Xanadu HQ

Xanadu