
Architecting OpenStack for enterprise reality

by Canonical on 11 March 2015
As OpenStack grows increasingly popular as a cloud-building technology for enterprises, companies are asking themselves several important questions. How viable is OpenStack as an enterprise platform? Is it possible (and feasible) to integrate it with existing virtualisation infrastructure, such as VMware vSphere? Is there a business case for such integration, and what are the risks and challenges involved? Finally, how do they best use OpenStack: is the “vanilla” architecture always the best approach, or is there a case for swapping out certain components for third-party tools?

Gigaom analyst Paul Miller looks at these questions and more in this report sponsored by Canonical. For a more in-depth look at integrating vSphere and OpenStack, you may also want to read this whitepaper.

Download eBook
