
Synnefo

Developer(s): GRNET
Type: Cloud computing
Written in: Python
Stable release: 0.18.1 / October 12, 2016
Repository: github.com/grnet/synnefo
License: GNU General Public License

Synnefo is a complete open source cloud stack written in Python that provides Compute, Network, Image, Volume and Storage services, similar to those offered by AWS. Synnefo manages multiple Ganeti clusters at the backend, which handle low-level VM operations, and uses Archipelago to unify cloud storage. To boost third-party compatibility, Synnefo exposes the OpenStack APIs to users.

Synnefo is being developed by GRNET (Greek Research and Technology Network), and it powers two of its public cloud services: the ~okeanos service, which is aimed at the Greek academic community, and the ~okeanos global service, which is open to all members of the GÉANT network.

History

In November 2006, in an effort to provide advanced cloud services to the Greek academic and research community, GRNET decides to launch a cloud storage service, similar to Amazon's S3, called Pithos. The project is outsourced and opens in public beta to members of the Greek academic and research community in May 2009.

In June 2010, GRNET decides on the next step in this course: to create a complete, AWS-like cloud service (Compute/Network/Volume/Image/Storage). This service, called ~okeanos, aims to provide the Greek academic and research community with access to a virtualized infrastructure that various projects can take advantage of, e.g. experiments, simulations and labs. Given the non-ephemeral nature of the resources the service provides, the need arises for persistent cloud servers. In search of a solution, in October 2010 GRNET decides to base the service on Google Ganeti and to design and implement all missing parts in-house.

In May 2011, the older Pithos service is rewritten from scratch in-house, with the intention of integrating it into ~okeanos as its storage service. Moreover, the new Pithos adds support for Dropbox-like syncing.

In July 2011, ~okeanos reaches its public alpha stage. This version (v0.5.2.1) includes the Identity, Compute, Network and a primitive Image service. The alpha release of the new, rewritten Pithos follows shortly after, in November 2011. It is marketed as Pithos+, and the old Pithos remains a separate service. The new Pithos+, though not yet integrated into ~okeanos, provides syncing and sharing capabilities for files, as well as native syncing clients for Mac OS X, iPhone, iPad and Windows.

In March 2012, ~okeanos enters the public alpha2 phase. This version (v0.9) fully integrates the new Pithos into ~okeanos, where it now acts as the unified store for Images and Files. Around this point, in April 2012, the ~okeanos team decides to refer to the whole software stack as Synnefo and starts writing the first version of the Synnefo documentation.

In December 2012, due to interest from other parties in the Synnefo stack, GRNET decides to conceptually separate the ~okeanos and Synnefo projects. Synnefo starts to become branding-neutral IaaS cloud computing software, while ~okeanos becomes its real-world application, an IaaS for the Greek academic and research community.

In April 2013, a new Synnefo version (v0.13) is released after a major cleanup and code refactoring. All separate components are merged into the single Synnefo repository. This is the first release as a unified project, containing all parts (Compute/Network/Volume/Image/Storage).

In June 2013, Synnefo v0.14 is released. As of this version, Synnefo is branding-neutral (all remaining ~okeanos references are removed). It also gains a branding mechanism and the corresponding documentation, so that others can adapt it to their own brand identity.

Overview

Synnefo has been designed to be deployed in any environment, from a single server to large-scale configurations. All Synnefo components use an intuitive settings mechanism that adds and removes settings dynamically as components are added to or removed from a physical node. All settings are stored in a single location.
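
As a rough illustration of that mechanism, a settings fragment might look like the snippet below. The file path, option name and value are hypothetical stand-ins shown only to convey the format; real names vary by component and version.

    # Hypothetical Synnefo settings fragment (plain Python, read at startup).
    # Installing a component on the node contributes its own settings to the
    # same shared location; removing the component removes them again.

    # Base URL that clients use to reach this deployment's identity service
    # (assumed option name and value, for illustration only).
    ASTAKOS_BASE_URL = "https://accounts.example-cloud.org/astakos"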

Components

Synnefo is modular in nature and consists of the following components:

Astakos (Identity/Account services)

Astakos is the Identity management component which provides a common user base to the rest of Synnefo. Astakos handles user creation, user groups, resource accounting, quotas, projects, and issues authentication tokens used across the infrastructure. It supports multiple authentication methods:

  • local username/password
  • LDAP / Active Directory
  • SAML 2.0 (Shibboleth) federated logins
  • Google
  • Twitter
  • LinkedIn

Users can add multiple login methods to a single account, according to the configured policy.
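
A minimal sketch of how these methods might be enabled in the Astakos settings; the option name (ASTAKOS_IM_MODULES) and the module identifiers are assumptions for illustration and may differ across versions:

    # Hypothetical Astakos settings excerpt: the list of enabled login methods.
    ASTAKOS_IM_MODULES = [
        "local",       # username/password accounts kept by Astakos
        "shibboleth",  # SAML 2.0 federated logins
        "google",
        "twitter",
        "linkedin",
    ]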

Astakos keeps track of resource usage across Synnefo, enforces quotas, and implements a common user dashboard. Quota handling is resource-type agnostic: resources (e.g., number of VMs, public IPs, or GB of storage space) are defined by each Synnefo component independently, then imported into Astakos for accounting and presentation.

Astakos runs at the cloud layer and exposes the OpenStack Keystone API for authentication, along with the Synnefo Account API for quota, user group and project management.
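
For example, a client can exchange its Astakos-issued token for a service catalog through the Keystone-compatible endpoint. A minimal sketch follows; the endpoint URL is a deployment-specific assumption, while the request and response bodies follow the standard Keystone v2.0 schema:

    # Minimal sketch: token authentication against the Keystone-compatible
    # API exposed by Astakos. URL and token are placeholders.
    import json
    import urllib.request

    ASTAKOS_TOKENS_URL = "https://accounts.example-cloud.org/identity/v2.0/tokens"
    TOKEN = "user-api-token"  # issued by Astakos

    payload = json.dumps({"auth": {"token": {"id": TOKEN}}}).encode()
    req = urllib.request.Request(
        ASTAKOS_TOKENS_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        access = json.load(resp)["access"]

    # The service catalog lists the endpoints of the other Synnefo services.
    for service in access["serviceCatalog"]:
        print(service["name"], service["endpoints"][0]["publicURL"])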

Pithos (File/Object Storage services)

Pithos is the Object/File Storage component of Synnefo. Users upload files to Pithos using either the Web UI, the command-line client, or native syncing clients. It is a thin layer mapping user files to content-addressable blocks which are then stored on a storage backend. Files are split into blocks of fixed size, which are hashed independently to create a unique identifier for each block, so each file is represented by a sequence of block names (a hashmap). This way, Pithos provides deduplication of file data; blocks shared among files are stored only once.
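
The scheme can be sketched in a few lines of Python. The block size and hash match the figures given in the next paragraph (4 MB blocks, SHA256); the function name is illustrative:

    # Compute a Pithos-style hashmap: the file as a sequence of block hashes.
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks

    def compute_hashmap(path):
        """Return the list of SHA256 identifiers of a file's blocks."""
        hashmap = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashmap.append(hashlib.sha256(block).hexdigest())
        return hashmap

    # Files sharing blocks produce overlapping hashmaps, so the shared
    # blocks are stored only once on the backend (deduplication).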

The current implementation uses 4 MB blocks hashed with SHA256. Content-based addressing also enables efficient two-way file syncing that can be used by all Pithos clients (e.g. the kamaki command-line client or the native Windows/Mac OS clients). Whenever someone wishes to upload an updated version of a file, the client hashes all blocks of the file and then asks the server to create a new version for this block sequence. The server returns an error reply with a list of the missing blocks. The client may then upload the missing blocks one by one, and retry file creation. Similarly, whenever a file has been changed on the server, the client can ask for its list of blocks and only download the modified ones.
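
A hedged sketch of that upload protocol, with a hypothetical server object standing in for the Pithos API (its two methods, create_version and put_block, are invented names for illustration):

    # Upload an updated file by sending only the blocks the server lacks.
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB, as above

    def compute_hashmap(path):
        with open(path, "rb") as f:
            return [hashlib.sha256(block).hexdigest()
                    for block in iter(lambda: f.read(BLOCK_SIZE), b"")]

    def upload_new_version(server, path):
        hashmap = compute_hashmap(path)
        missing = server.create_version(path, hashmap)  # hypothetical call
        if missing:  # the server replied with the blocks it does not yet have
            with open(path, "rb") as f:
                for index, block_hash in enumerate(hashmap):
                    if block_hash in missing:
                        f.seek(index * BLOCK_SIZE)
                        server.put_block(block_hash, f.read(BLOCK_SIZE))
            server.create_version(path, hashmap)  # retry; now complete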

Pithos runs at the cloud layer and exposes the OpenStack Object Storage API to the outside world, with custom extensions for syncing. Any client speaking to OpenStack Swift can also be used to store objects in a Pithos deployment (see the sketch after the list below). The process of mapping user files to hashed objects is independent of the actual storage backend, which is selectable by the administrator using pluggable drivers. Currently, Pithos has drivers for two storage backends:

  • files on a shared filesystem, e.g., NFS, Lustre, GPFS or GlusterFS
  • objects on a Ceph/RADOS cluster

Whatever the storage backend, it is responsible for storing objects reliably, without any connection to the cloud APIs or to the hashing operations.
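
Because Pithos speaks the Swift API, a plain HTTP PUT suffices to store an object. In the sketch below the base URL is an assumed deployment-specific endpoint; the header and path layout follow the standard Swift object API:

    # Store an object in Pithos through the OpenStack Object Storage API.
    import urllib.request

    PITHOS_URL = "https://storage.example-cloud.org/object-store/v1"  # assumed
    TOKEN = "user-api-token"

    def put_object(account, container, name, data):
        req = urllib.request.Request(
            f"{PITHOS_URL}/{account}/{container}/{name}",
            data=data, method="PUT",
            headers={"X-Auth-Token": TOKEN})
        urllib.request.urlopen(req)

    put_object("user@example.org", "backups", "notes.txt", b"hello pithos")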

Cyclades (Compute/Network/Image/Volume services)

Cyclades is the Synnefo component that implements the Compute, Network, Image and Volume services. It exposes the associated OpenStack REST APIs: OpenStack Compute, Network, Glance and, soon, also Cinder. Cyclades is the part that manages multiple Ganeti clusters at the backend, issuing commands to each cluster over Ganeti's Remote API (RAPI). The administrator can expand the infrastructure dynamically by adding new Ganeti clusters to reach datacenter scale. Cyclades knows nothing about low-level VM management operations, e.g., VM creation, migration among physical nodes, or handling of node downtime; the design and implementation of the end-user API is orthogonal to VM handling at the backend.
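
To illustrate the interface Cyclades drives, the snippet below lists a cluster's instances over RAPI. RAPI is plain HTTPS (port 5080 by default) and /2/instances is a standard resource; the host name is a placeholder, and real deployments also need RAPI credentials:

    # List a Ganeti cluster's instances over the Remote API (RAPI).
    import json
    import urllib.request

    RAPI_URL = "https://ganeti-master.example-cloud.org:5080"  # placeholder

    with urllib.request.urlopen(f"{RAPI_URL}/2/instances") as resp:
        for instance in json.load(resp):
            print(instance["id"])  # each item carries an "id" and a "uri"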

There are two distinct, asynchronous paths in the interaction between Synnefo and Ganeti. The effect path is activated in response to a user request; Cyclades issues VM control commands to Ganeti over RAPI. The update path is triggered whenever the state of a VM changes, due to Synnefo- or administrator-initiated actions at the Ganeti level. In the update path, Synnefo monitors Ganeti's job queue and produces notifications to the rest of the Synnefo infrastructure over a message queue.
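
The update path can be pictured as a queue consumer. The sketch below assumes an AMQP broker (accessed with the pika library); the queue name and message format are invented for illustration, and only the general shape matches the description above:

    # Consume VM state-change notifications from a message queue.
    import json
    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="mq.example-cloud.org"))  # placeholder
    channel = connection.channel()
    channel.queue_declare(queue="ganeti-events", durable=True)   # assumed name

    def on_event(ch, method, properties, body):
        event = json.loads(body)  # hypothetical payload, e.g. {"vm": ..., "state": ...}
        print("VM state change:", event)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="ganeti-events", on_message_callback=on_event)
    channel.start_consuming()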

Users have full control over their VMs: they can create new ones, start them, shut them down, reboot, and destroy them. For the configuration of their VMs they can select the number of CPUs, the size of RAM and system disk, and the operating system from pre-defined Images, including popular Linux distributions (Debian, Ubuntu, CentOS, Fedora, Gentoo, Arch Linux, openSUSE), MS Windows Server 2008 R2 and 2012, as well as FreeBSD.
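
Since the Compute API is OpenStack-compatible, creating a VM is a standard "create server" request. In this sketch the endpoint URL and the image/flavor identifiers are placeholders; the request body follows the stock OpenStack Compute schema:

    # Create a VM through the OpenStack Compute API exposed by Cyclades.
    import json
    import urllib.request

    COMPUTE_URL = "https://compute.example-cloud.org/compute/v2.0"  # assumed
    TOKEN = "user-api-token"

    payload = json.dumps({
        "server": {
            "name": "my-debian-vm",
            "imageRef": "image-uuid-placeholder",   # e.g. a Debian Image
            "flavorRef": "flavor-id-placeholder",   # a CPU/RAM/disk combination
        }
    }).encode()

    req = urllib.request.Request(
        f"{COMPUTE_URL}/servers", data=payload,
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN})
    with urllib.request.urlopen(req) as resp:
        print("created server", json.load(resp)["server"]["id"])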

The REST API for VM management, being OpenStack compatible, can interoperate with 3rd-party tools and client libraries.

The Cyclades UI is written in JavaScript/jQuery and runs entirely on the client side for maximum responsiveness. It is just another API client; all UI operations happen through asynchronous calls over the API.

The networking functionality includes dual IPv4/IPv6 connectivity for each VM and easy, platform-provided firewalling, either through an array of pre-configured firewall profiles or through a roll-your-own firewall inside the VM. Users may create multiple private, virtual L2 networks, so that they can construct arbitrary network topologies, e.g., to deploy VMs in multi-tier configurations. The networking functionality is exported all the way from the backend to the API and the UI.
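
Creating such a private network goes through the OpenStack Network API that Cyclades exposes. A minimal sketch, with a placeholder endpoint and the standard "create network" request body:

    # Create a private virtual L2 network through the OpenStack Network API.
    import json
    import urllib.request

    NETWORK_URL = "https://network.example-cloud.org/network/v2.0"  # assumed
    TOKEN = "user-api-token"

    payload = json.dumps({"network": {"name": "backend-lan"}}).encode()
    req = urllib.request.Request(
        f"{NETWORK_URL}/networks", data=payload,
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN})
    with urllib.request.urlopen(req) as resp:
        print("created network", json.load(resp)["network"]["id"])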
