pax_global_header 0000666 0000000 0000000 00000000064 12545176055 0014524 g ustar 00root root 0000000 0000000 52 comment=f755d686b7dab54ed4e9df69aaea1e4e2f936a67
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/ 0000775 0000000 0000000 00000000000 12545176055 0020751 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/.gitignore 0000664 0000000 0000000 00000000075 12545176055 0022743 0 ustar 00root root 0000000 0000000 *.pyc
doc/_build/*
*.swp
*.swo
*.log
test_*.py
.ropeproject/
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/COPYRIGHT 0000664 0000000 0000000 00000000040 12545176055 0022236 0 ustar 00root root 0000000 0000000 Copyright © 2010-2012 Smartjog
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/LICENSE 0000664 0000000 0000000 00000016743 12545176055 0021771 0 ustar 00root root 0000000 0000000 GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/MANIFEST.in 0000664 0000000 0000000 00000000060 12545176055 0022503 0 ustar 00root root 0000000 0000000 recursive-include etc *
recursive-include doc *
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/README 0000664 0000000 0000000 00000000000 12545176055 0021617 0 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/bin/ 0000775 0000000 0000000 00000000000 12545176055 0021521 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/bin/cc-addaccount 0000775 0000000 0000000 00000006150 12545176055 0024141 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
#coding=utf8
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
'''
Script used to create an account in the cc-server account directory.
'''
import logging
import logging.handlers
import os
from getpass import getpass
from pwd import getpwnam
from grp import getgrnam
from optparse import OptionParser
from cloudcontrol.server.conf import CCConf
DEFAULT_ACCOUNT_DIRECTORY = '/var/lib/cc-server/'
DEFAULT_ROLE = 'cli'
UMASK = 0o0177
DEFAULT_CHOWN_USER = 'cc-server'
DEFAULT_CHOWN_GROUP = 'cc-server'
if __name__ == '__main__':
op = OptionParser(usage='%prog [options] login')
op.add_option('-d', '--directory', default=DEFAULT_ACCOUNT_DIRECTORY,
help='account directory')
op.add_option('-p', '--password', action='store_true',
help='ask for the password')
op.add_option('-c', '--copy', default=None,
help='copy this already existing account')
op.add_option('-r', '--role', default=None, choices=('cli', 'hv', 'host'),
help='specify the role (default %default)')
op.add_option('-u', '--user', default=DEFAULT_CHOWN_USER,
help='User running cc-server (default %default)')
op.add_option('-g', '--group', default=DEFAULT_CHOWN_GROUP,
help='Group running cc-server (default %default)')
options, args = op.parse_args()
if len(args) != 1:
op.error('a login must be provided')
if options.role is not None and options.copy is not None:
op.error('you can\'t specify a role for a copy')
if options.role is None:
role = DEFAULT_ROLE
else:
role = options.role
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
logger.addHandler(handler)
conf = CCConf(logger, options.directory)
if options.password:
password = getpass('Password: ')
password_again = getpass('Password (again): ')
if password != password_again:
op.error('password mismatch')
elif not password:
op.error('no password provided')
else:
password = None
os.umask(UMASK)
if options.copy is None:
conf.create_account(args[0], role, password)
else:
conf.copy_account(options.copy, args[0], password)
# Chown the files:
uid = getpwnam(options.user).pw_uid
gid = getgrnam(options.group).gr_gid
filename = os.path.join(options.directory, '%s.json' % args[0])
os.chown(filename, uid, gid)
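The UMASK constant in this script (0o0177) ensures that newly created account files are readable and writable by their owner only. A minimal sketch of the effect on POSIX systems, using a hypothetical temporary directory rather than the real account directory:

```python
import os
import stat
import tempfile

# Apply the same umask cc-addaccount uses before writing account files.
old_umask = os.umask(0o177)
try:
    path = os.path.join(tempfile.mkdtemp(), 'demo.json')
    with open(path, 'w') as f:
        f.write('{}')
    # open() requests mode 0o666; the umask clears the group/other bits,
    # leaving 0o666 & ~0o177 == 0o600 (owner read/write only).
    mode = stat.S_IMODE(os.stat(path).st_mode)
finally:
    os.umask(old_umask)
```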
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/bin/cc-server 0000775 0000000 0000000 00000016253 12545176055 0023347 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
#coding=utf8
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
from cloudcontrol.common.helpers.logger import patch_logging; patch_logging()
import os
import sys
import atexit
import logging
import logging.handlers
import ConfigParser
import signal
from optparse import OptionParser
from pwd import getpwnam
from grp import getgrnam
from daemon import DaemonContext
from cloudcontrol.server.server import CCServer
from cloudcontrol.server import __version__
DEFAULT_CONFIG_FILE = '/etc/cc-server.conf'
DEFAULT_UMASK = 0o0177
DEFAULT_CONFIGURATION = {
'daemonize': False,
'user': '',
'group': '',
'pidfile': '',
'umask': '077',
'port': 1984,
'debug': False,
'account_db': None, # None = mandatory option
'interface': '127.0.0.1',
'ssl_cert': None,
'ssl_key': None,
'maxcon': 600,
'maxidle': 120,
}
class EncodingFormatter(logging.Formatter, object):
def __init__(self, fmt=None, datefmt=None, encoding='utf-8'):
super(EncodingFormatter, self).__init__(fmt, datefmt)
self._encoding = encoding
def formatException(self, ei):
expt = super(EncodingFormatter, self).formatException(ei)
if isinstance(expt, str):
expt = expt.decode(self._encoding, 'replace')
return expt
def format(self, record):
msg = super(EncodingFormatter, self).format(record)
if isinstance(msg, unicode):
msg = msg.encode(self._encoding, 'replace')
return msg
def run_server(options):
# Setup logging facility:
level = logging.INFO
if options['debug'] in (True, 'yes', 'true'):
level = logging.DEBUG
logger = logging.getLogger()
logger.setLevel(level)
if options['stdout']:
handler = logging.StreamHandler()
fmt = EncodingFormatter('[%(asctime)s] '
'\x1B[30;47m%(name)s\x1B[0m '
'\x1B[30;42m%(levelname)s\x1B[0m: '
'%(message)s')
else:
facility = logging.handlers.SysLogHandler.LOG_DAEMON
handler = logging.handlers.SysLogHandler(address='/dev/log',
facility=facility)
fmt = EncodingFormatter('%(name)s: %(levelname)s %(message)s')
handler.setFormatter(fmt)
logger.addHandler(handler)
server = CCServer(logger.getChild('cc-server'),
conf_dir=options['account_db'],
maxcon=int(options['maxcon']),
maxidle=int(options['maxidle']),
port=int(options['port']),
address=options['interface'],
keyfile=options['ssl_key'],
certfile=options['ssl_cert'])
def shutdown_handler(signum, frame):
'''
Handler called when SIGINT or SIGTERM is received.
'''
server.rpc.shutdown()
logging.info('Server properly exited by signal')
watcher_sigint = server.rpc.loop.signal(signal.SIGINT, shutdown_handler)
watcher_sigterm = server.rpc.loop.signal(signal.SIGTERM, shutdown_handler)
watcher_sigint.start()
watcher_sigterm.start()
try:
server.run()
except Exception as err:
logging.critical('Server failed: %s', err)
import traceback
traceback.print_exc(file=sys.stdout)
sys.exit(3)
if __name__ == '__main__':
op = OptionParser(version='%%prog v%s' % __version__)
op.add_option('-c', '--config', default=DEFAULT_CONFIG_FILE,
help='configuration file (default: %default)')
op.add_option('-d', '--daemonize', action='store_true',
help='run as a daemon')
op.add_option('-f', '--foreground', action='store_false', dest='daemonize',
help='don\'t run as a daemon')
op.add_option('-p', '--pidfile', help='write pidfile to the path')
op.add_option('-k', '--umask', help='set the umask of the process')
op.add_option('-u', '--user', help='run as user')
op.add_option('-g', '--group', help='run as group')
op.add_option('-s', '--stdout', action='store_true', default=False,
help='log in stdout instead of syslog')
cliopts, args = op.parse_args()
# Reading the config file:
config = ConfigParser.SafeConfigParser()
config.read(cliopts.config)
try:
options = dict(config.items('server'))
except ConfigParser.NoSectionError:
sys.stderr.write("Configuration error: 'server' section must exist "
"in '%s'\n" % cliopts.config)
sys.exit(1)
# Applying default config file options:
for opt, default in DEFAULT_CONFIGURATION.iteritems():
if opt not in options or not options[opt]:
if default is None:
sys.stderr.write("Configuration error: you must specify '%s' "
"option in '%s' !\n" % (opt, cliopts.config))
sys.exit(1)
else:
options[opt] = default
# Merge cli options and .conf file options:
for opt in ('daemonize', 'pidfile', 'umask', 'user', 'group', 'stdout'):
if getattr(cliopts, opt) is not None:
options[opt] = getattr(cliopts, opt)
# Create option set for the daemonization process:
daemon_opts = {}
daemonize = options['daemonize']
if isinstance(daemonize, str):
daemonize = daemonize in ('yes', 'true')
daemon_opts['detach_process'] = daemonize
daemon_opts['umask'] = int(options['umask'], 8)
if options['user']:
if options['user'].isdigit():
daemon_opts['uid'] = int(options['user'])
else:
daemon_opts['uid'] = getpwnam(options['user']).pw_uid
if options['group']:
if options['group'].isdigit():
daemon_opts['gid'] = int(options['group'])
else:
daemon_opts['gid'] = getgrnam(options['group']).gr_gid
if not daemonize:
daemon_opts['stderr'] = sys.stderr
daemon_opts['stdout'] = sys.stderr
# We have to write the pidfile ourselves because the daemon library
# writes it after the privilege downgrade:
if options['pidfile']:
pidfile = open(options['pidfile'], 'w')
daemon_opts['files_preserve'] = [pidfile]
else:
pidfile = None
with DaemonContext(**daemon_opts):
if pidfile is not None:
pidfile.write('%s' % os.getpid())
pidfile.flush()
@atexit.register
def clean_pidfile():
pidfile.seek(0)
pidfile.truncate()
pidfile.flush()
run_server(options)
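For reference, the DEFAULT_CONFIGURATION table above implies a `[server]` section along the following lines. `account_db`, `ssl_cert` and `ssl_key` default to None and are therefore mandatory; every path and value below is illustrative, not canonical:

```ini
[server]
; mandatory options (default is None):
account_db = /var/lib/cc-server
ssl_cert = /etc/ssl/certs/cc-server.pem
ssl_key = /etc/ssl/private/cc-server.key
; optional overrides (defaults shown by DEFAULT_CONFIGURATION):
interface = 127.0.0.1
port = 1984
daemonize = yes
user = cc-server
group = cc-server
pidfile = /var/run/cc-server.pid
umask = 077
maxcon = 600
maxidle = 120
debug = no
```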
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/ 0000775 0000000 0000000 00000000000 12545176055 0023460 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/__init__.py 0000664 0000000 0000000 00000001360 12545176055 0025571 0 ustar 00root root 0000000 0000000 # This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
__import__('pkg_resources').declare_namespace(__name__)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/ 0000775 0000000 0000000 00000000000 12545176055 0024766 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/__init__.py 0000664 0000000 0000000 00000001367 12545176055 0027106 0 ustar 00root root 0000000 0000000 # This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" CloudControl server libraries.
"""
__version__ = '25~dev'
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/allocator.py 0000664 0000000 0000000 00000022270 12545176055 0027323 0 ustar 00root root 0000000 0000000 # This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" This module contains the hypervisor allocation algorithm.
"""
from collections import defaultdict
from cloudcontrol.server.utils import itercounter
class AllocationError(Exception):
""" Exception raised when an error occurs while allocating hypervisor
to a virtual machine.
"""
# Target filters:
class Filter(object):
def __init__(self, logger, vmspec, server, client):
self.logger = logger
self.vmspec = vmspec
self.server = server
self.client = client
def tql_filter(self, query):
return query
def filter(self, candidates):
for candidate in candidates:
yield candidate
def sorter(self, candidate):
return None
class IsVmUnique(Filter):
""" Raise an allocation error if a VM with the same title already exists.
"""
def filter(self, candidates):
title = self.vmspec.get('title')
if title:
vms = self.server.list('r=vm&t="%s"' % title)
if vms:
raise AllocationError('A virtual machine with the same title already exists')
for candidate in candidates:
yield candidate
class TargetFilter(Filter):
""" Filter on targeted hypervisors.
"""
def tql_filter(self, query):
if 'target' in self.vmspec:
query = '(%s)&%s' % (query, self.vmspec['target'])
return query
class IsAllocatable(Filter):
""" Filter the administratively non-allocatable hypervisors.
"""
def tql_filter(self, query):
return '(%s)&alloc=yes' % query
class IsConnected(Filter):
""" Filter the unconnected hypervisors.
"""
def tql_filter(self, query):
return '(%s)&con' % query
class HaveEnoughMemory(Filter):
""" Filter hypervisors with not enough allocatable memory for the VM.
"""
def tql_filter(self, query):
if 'do_not_check_memory' not in self.vmspec.get('flags', []):
return '(%s)&memremaining>%s' % (query, self.vmspec['memory'])
else:
return '(%s)&memremaining' % query
def sorter(self, candidate):
try:
return -float(candidate.get('memremaining'))
except ValueError:
return float('inf')
class HaveEnoughStorage(Filter):
""" Filter hypervisors with not enough storage for the VM.
"""
DEFAULT_VG = 'local'
def tql_filter(self, query):
# Compute the total size per VG:
size_by_vg = defaultdict(lambda: 0)
for volume in self.vmspec.get('volumes', []):
size_by_vg[volume.get('pool', self.DEFAULT_VG)] += volume['size']
# Generate the TQL query:
tql = ''
for vg, size in size_by_vg.iteritems():
tql += '&sto%s_free>=%s' % (vg, size)
return '(%s)%s' % (query, tql)
class HaveEnoughCPU(Filter):
""" Filter hypervisors with not enough CPU for the VM, according to the
overcommit policy.
"""
DEFAULT_ALLOWED_RATIO = 1
def tql_filter(self, query):
return '(%s)$cpualloc$cpu$cpuallowedratio$cpuremaining$cpuallocratio' % query
def filter(self, candidates):
cpu = int(self.vmspec['cpu'])
for candidate in candidates:
ratio = (float(candidate.get('cpualloc')) + cpu) / float(candidate.get('cpu'))
if ratio <= float(candidate.get('cpuallowedratio', self.DEFAULT_ALLOWED_RATIO)):
yield candidate
def sorter(self, candidate):
try:
return float(candidate.get('cpuallocratio'))
except ValueError:
return float('inf')
class SatisfyRiskGroups(Filter):
""" Complies with risk groups.
"""
def tql_filter(self, query):
tags = ''
for tag in self.vmspec.get('riskgroup', {}):
tags += '$%s' % tag
return '(%s)%s' % (query, tags)
def filter(self, candidates):
riskgroup = self.vmspec.get('tags', {}).get('riskgroup')
riskgroup_props = self.vmspec.get('riskgroup')
if riskgroup and riskgroup_props:
# Get the list of VMs within the riskgroup:
vms = self.server.list('r=vm&riskgroup="%s"$p' % riskgroup)
count_per_hv = defaultdict(lambda: 0)
# Store count per riskgroup as instance attribute as we will also use
# it in sorting step. Note that filter will ALWAYS be called before
# a sorting operation.
self.count_per_riskgroup = dict((x, defaultdict(lambda: 0)) for x in riskgroup_props)
if vms:
# Produce the mapping between the hypervisor and the number of VMs
# in the riskgroup:
for vm in vms:
count_per_hv[vm['p']] += 1
# Generate the set of hv hosting these VMs:
hv = set('id=%s' % vm['p'] for vm in vms)
# Generate the TQL matching this list of hv:
hv_tql = '&'.join(hv)
# Generate the list of tags to show on this list of hv:
hv_tql_show = ''
for k in riskgroup_props:
hv_tql_show += '$%s' % k
# Assemble the TQL query:
tql = '(%s)%s' % (hv_tql, hv_tql_show)
# Execute the query and retrieve the full list of hv hosting
# the VMs within the riskgroup:
hvs = self.server.list(tql)
# Count the number of vm per riskgroup tag:
for hv in hvs:
for tag in riskgroup_props:
self.count_per_riskgroup[tag][hv[tag]] += count_per_hv[hv['id']]
# Yield only hv which have not reached riskgroup limits:
for hv in candidates:
for tag, limit in riskgroup_props.iteritems():
if self.count_per_riskgroup[tag].get(hv[tag], 0) >= limit:
break
else:
yield hv
else:
for hv in candidates:
yield hv
def sorter(self, candidate):
orders = []
if 'riskgroup' in self.vmspec:
for tag in self.vmspec['riskgroup']:
orders.append(self.count_per_riskgroup[tag].get(candidate[tag], 0))
return orders
class Allocator(object):
BASE_TARGET_TQL = 'r=hv'
DEFAULT_FILTERS = [IsAllocatable, TargetFilter, IsVmUnique, IsConnected,
SatisfyRiskGroups, HaveEnoughCPU, HaveEnoughMemory,
HaveEnoughStorage]
def __init__(self, logger, server, client, filters=DEFAULT_FILTERS):
self.logger = logger
self.server = server
self.client = client
self.filters = filters
def allocate(self, vmspec, tql_target):
# Instantiate filters:
filters = [f(self.logger.getChild(f.__name__), vmspec, self.server, self.client) for f in self.filters]
self.logger.info('Looking for candidates for vmspec: %r', vmspec)
# Generate the TQL query to select target hypervisors:
tql = self.BASE_TARGET_TQL
if tql_target:
tql = '(%s)&%s' % (tql, tql_target)
for filter in filters:
tql = filter.tql_filter(tql)
# Get the list of candidates according to the TQL query:
self.logger.debug('Querying candidates: %s', tql)
candidates = self.client.list(tql, method='allocate')
self.logger.debug('Got %d candidates to filter', len(candidates))
def _cb_logger_debug(count, message):
self.logger.debug(message, count)
# Filter the list of candidates:
for filter_ in filters:
candidates = itercounter(candidates, _cb_logger_debug,
'%s: %%d candidates In' % filter_.__class__.__name__)
candidates = filter_.filter(candidates)
candidates = itercounter(candidates, _cb_logger_debug,
'%s: %%d candidates Out' % filter_.__class__.__name__)
# Sort the candidates:
def sorter(candidate):
return [f.sorter(candidate) for f in filters]
candidates = sorted(candidates, key=sorter)
for i, candidate in enumerate(candidates):
sorters = ['%s: %s' % (f.__class__.__name__, f.sorter(candidate)) for f in filters]
self.logger.debug('Candidate %d %s -> %s', i, candidate['id'], ' '.join(sorters))
# Select the first matching candidate:
if candidates:
return [x['id'] for x in candidates]
else:
raise AllocationError('No candidate found for %s' % vmspec.get('title', repr(vmspec)))
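The filter/sorter chaining in Allocator.allocate can be sketched in isolation: each filter narrows the candidate stream, then every filter's sorter contributes one element of a composite sort key. The classes and data below are hypothetical stand-ins, not the real TQL-backed filters:

```python
class Connected:
    """Keep only hypervisors that are currently connected."""
    def filter(self, candidates):
        return (c for c in candidates if c['connected'])
    def sorter(self, candidate):
        return 0  # no ordering preference

class MinMemory:
    """Keep hypervisors with enough free memory, prefer the emptiest."""
    def __init__(self, needed):
        self.needed = needed
    def filter(self, candidates):
        return (c for c in candidates if c['mem_free'] >= self.needed)
    def sorter(self, candidate):
        # Negate so that sorting ascending puts the most free memory first.
        return -candidate['mem_free']

def allocate(candidates, filters):
    # Chain the filters lazily, like Allocator.allocate does:
    for f in filters:
        candidates = f.filter(candidates)
    # Sort by the tuple of every filter's sorter value:
    return sorted(candidates, key=lambda c: [f.sorter(c) for f in filters])

hvs = [
    {'id': 'hv1', 'mem_free': 4096, 'connected': True},
    {'id': 'hv2', 'mem_free': 8192, 'connected': True},
    {'id': 'hv3', 'mem_free': 16384, 'connected': False},
]
ranked = allocate(hvs, [Connected(), MinMemory(2048)])
```

Here hv3 is filtered out despite having the most memory, and hv2 ranks before hv1 because its sorter value is smaller.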
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/ 0000775 0000000 0000000 00000000000 12545176055 0026427 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/__init__.py 0000664 0000000 0000000 00000034360 12545176055 0030546 0 ustar 00root root 0000000 0000000 # This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" Connected client management package.
This package stores classes representing each client's role and the associated
sjRPC handler.
"""
from functools import partial
from datetime import datetime
from sjrpc.utils import ConnectionProxy
from sjrpc.core import RpcError
from cloudcontrol.server.handlers import CCHandler, listed
from cloudcontrol.server.exceptions import RightError
from cloudcontrol.server.db import RemoteTag
from cloudcontrol.common.tql.db.tag import StaticTag, CallbackTag
from cloudcontrol.common.tql.db.object import TqlObject
class CCServerConnectionProxy(ConnectionProxy):
""" RPC proxy used to add the node login in the exception message.
"""
def __init__(self, login, *args, **kwargs):
super(CCServerConnectionProxy, self).__init__(*args, **kwargs)
self._login = login
def __getattr__(self, name):
def func(*args, **kwargs):
try:
return super(CCServerConnectionProxy, self).__getattr__(name)(*args, **kwargs)
except RpcError as err:
err.message = '%s: %s' % (self._login, err.message)
raise err
except Exception as err:
raise err.__class__('%s: %s' % (self._login, str(err)))
return func
class RegisteredCCHandler(CCHandler):
""" Basic handler for all registered clients.
"""
def __getitem__(self, name):
self.client.top()
return super(RegisteredCCHandler, self).__getitem__(name)
def on_disconnect(self, conn):
self.logger.info('Client %s disconnected', self.client.login)
self.client.shutdown()
#
# Tags registration handler functions:
#
def tags_register(self, name, ttl=None, value=None):
""" Register a new tag on the calling node.
:param name: name of the tag to register
:param ttl: ttl of the tag (or None if not applicable)
:param value: value to fill the tag (optional)
"""
self.client.tags_register(name, ttl, value)
def tags_unregister(self, name):
""" Unregister a tag on the calling node.
:param name: name of the tag to unregister
"""
self.client.tags_unregister(name)
def tags_drop(self, name):
""" Drop the tag value of the specified tag on the calling node.
:param name: name of the tag to drop
"""
self.client.tags_drop(name)
def tags_update(self, name, value, ttl=None):
""" Update the value of the specified tag on the calling node.
:param name: name of the tag to update
:param value: new tag value
:param ttl: new ttl value
"""
self.client.tags_update(name, value, ttl)
#
# Sub objects functions:
#
@listed
def register(self, obj_id, role):
""" Register an object managed by the calling node.
.. note:
the obj_id argument passed to this handler is the object id of the
registered object (not the fully qualified id, the server will
preprend the id by "node_id." itself).
:param obj_id: the id of the object to register
:param role: the role of the object to register
"""
self.client.register(obj_id, role)
@listed
def unregister(self, obj_id):
""" Unregister an object managed by the calling node.
.. note:
the obj_id argument passed to this handler is the object id of the
unregistered object (not the fully qualified id, the server will
prepend "node_id." to the id itself).
:param obj_id: the id of the object to unregister
"""
self.client.unregister(obj_id)
@listed
def sub_tags_register(self, obj_id, name, ttl=None, value=None):
""" Register a new remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to register
:param ttl: TTL of the tag if applicable (None = no TTL, the tag will
never expire)
:param value: value of the tag
"""
self.client.sub_tags_register(obj_id, name, ttl, value)
@listed
def sub_tags_unregister(self, obj_id, name):
""" Unregister a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to unregister
"""
self.client.sub_tags_unregister(obj_id, name)
@listed
def sub_tags_drop(self, obj_id, name):
""" Drop the cached value of a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to drop
"""
self.client.sub_tags_drop(obj_id, name)
@listed
def sub_tags_update(self, obj_id, name, value=None, ttl=None):
""" Update a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to update
:param value: new value of the tag
:param ttl: new ttl of the tag
"""
self.client.sub_tags_update(obj_id, name, value, ttl)
class Client(object):
""" Base class for all types cc-server clients.
:param login: login of the client
:param server: server instance
:param connection: rpc connection to the client
"""
ROLE = None
RPC_HANDLER = RegisteredCCHandler
KILL_ALREADY_CONNECTED = False
roles = {}
def __init__(self, logger, login, server, connection):
self.logger = logger
self._login = login
self._server = server
self._connection = connection
self._handler = self.RPC_HANDLER(self)
self._proxy = CCServerConnectionProxy(login, self._connection.rpc)
self._last_action = datetime.now()
self._connection_date = datetime.now()
self._tql_object = None
self._children = {}
# Remote tags registered:
self._remote_tags = set()
def _get_tql_object(self):
""" Get the TQL object of the client from the cc-server tql database.
"""
return self._server.db.get(self.login)
@classmethod
def register_client_class(cls, class_):
""" Register a new client class.
"""
cls.roles[class_.ROLE] = class_
@classmethod
def from_role(cls, role, logger, login, server, connection):
return cls.roles[role](logger, login, server, connection)
#
# Properties
#
@property
def proxy(self):
""" Return a proxy to the rpc.
"""
return self._proxy
@property
def object(self):
""" Return the tql object of this client.
"""
return self._tql_object
@property
def login(self):
""" Return the login of this client.
"""
return self._login
@property
def role(self):
""" Return the role of this client.
"""
return self.ROLE
@property
def server(self):
""" Return the cc-server binded to this client.
"""
return self._server
@property
def conn(self):
""" Return the sjrpc connection to the client.
"""
return self._connection
@property
def uptime(self):
""" Get the uptime of the client connection in seconds.
:return: uptime of the client
"""
dt = datetime.now() - self._connection_date
return dt.seconds + dt.days * 86400
@property
def idle(self):
""" Get the idle time of the client connection in seconds.
        :return: idle time of the client
"""
dt = datetime.now() - self._last_action
return dt.seconds + dt.days * 86400
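The `uptime` and `idle` properties both convert a timedelta to whole seconds by hand. A minimal sketch of why the days term is needed: `timedelta.seconds` only holds the sub-day remainder, so whole days are added back as 86400-second blocks (microseconds are dropped).

```python
from datetime import timedelta

# Same computation as the `uptime` and `idle` properties above.
def to_seconds(dt):
    # dt.seconds is always < 86400; dt.days carries the rest.
    return dt.seconds + dt.days * 86400

dt = timedelta(days=2, hours=3, seconds=5)
print(to_seconds(dt))  # 183605
```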
@property
def ip(self):
""" Get client remote ip address.
"""
peer = self.conn.getpeername()
return ':'.join(peer.split(':')[:-1])
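The `ip` property strips the port from the peername string (assumed here to be formatted as "address:port"). A sketch of why it splits on the last colon only: dropping just the final colon-separated field keeps addresses that themselves contain colons, such as IPv6, intact.

```python
# Hypothetical standalone version of the port-stripping logic used by
# the `ip` property; the "address:port" format is an assumption about
# what getpeername() returns.
def peer_ip(peer):
    # Drop only the last colon-separated field (the port).
    return ':'.join(peer.split(':')[:-1])

print(peer_ip('192.0.2.10:4567'))   # 192.0.2.10
print(peer_ip('2001:db8::1:4567'))  # 2001:db8::1
```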
def attach(self):
""" Attach the client to the server.
"""
# Set the role's handler for the client:
self.conn.rpc.set_handler(self._handler)
# Register the client in tql database:
self._tql_object = self._get_tql_object()
# Register the server defined client tags:
self._tql_object.register(CallbackTag('con', lambda: self.uptime, ttl=0))
self._tql_object.register(CallbackTag('idle', lambda: self.idle, ttl=0))
self._tql_object.register(CallbackTag('ip', lambda: self.ip))
def shutdown(self):
""" Shutdown the connection to the client.
"""
# Unregister all children:
for child in self._children.copy():
self.unregister(child)
# Disable the client handler:
self.conn.rpc.set_handler(None)
# Unregister all remote tags:
for tag in self._remote_tags.copy():
self.tags_unregister(tag)
        # Unregister all server defined client tags:
self._tql_object.unregister('con')
self._tql_object.unregister('idle')
self._tql_object.unregister('ip')
self._server.rpc.unregister(self.conn, shutdown=True)
self._server.unregister(self)
def top(self):
""" Reset the "last action" date to now.
"""
self._last_action = datetime.now()
def async_remote_tags(self, watcher, robj, tags):
""" Asynchronously update tags from the remote client using
specified watcher.
"""
watcher.register(self.conn, 'get_tags', tags, _data=(tags, robj))
def tags_register(self, name, ttl=None, value=None):
""" Register a new remote tag for the client.
:param name: name of the tag to register
:param ttl: TTL of the tag if applicable (None = no TTL, the tag will
never expire)
:param value: value of the tag
"""
tag = RemoteTag(name, self.async_remote_tags, ttl=ttl)
self._tql_object.register(tag)
self._remote_tags.add(name)
def tags_unregister(self, name):
"""
Unregister a remote tag for the client.
:param name: name of the tag to unregister
"""
self._tql_object.unregister(name)
self._remote_tags.discard(name)
def tags_drop(self, name):
""" Drop the cached value of a remote tag for the client.
:param name: name of the tag to drop
"""
tag = self._tql_object.get(name)
if tag is not None:
tag.invalidate()
def tags_update(self, name, value=None, ttl=None):
""" Update a remote tag.
:param name: name of the tag to update
:param value: new value of the tag
:param ttl: new ttl of the tag
"""
tag = self._tql_object.get(name)
if tag is not None:
if value is not None:
tag.cached = value
if ttl is not None:
tag.ttl = ttl
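The methods above manipulate a remote tag through three operations: updating its cached value, bounding its age with a TTL, and invalidating it so the next read goes back to the client. A hypothetical sketch of that caching contract (the real `RemoteTag` API is not shown in this file, so the names below are illustrative only):

```python
import time

# Illustrative stand-in for the assumed RemoteTag caching behaviour:
# `cached` stores the last value, `ttl` bounds its age in seconds, and
# `invalidate()` drops the value entirely.
class CachedValue(object):
    def __init__(self, ttl=None):
        self.ttl = ttl
        self._value = None
        self._stamp = None

    @property
    def cached(self):
        if self._stamp is None:
            return None  # never set, or invalidated
        if self.ttl is not None and time.time() - self._stamp > self.ttl:
            return None  # expired
        return self._value

    @cached.setter
    def cached(self, value):
        self._value = value
        self._stamp = time.time()

    def invalidate(self):
        self._stamp = None

tag = CachedValue(ttl=60)
tag.cached = 'up'
print(tag.cached)  # up
tag.invalidate()
print(tag.cached)  # None
```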
def list(self, query, show=None, method=None):
return self._server.list(query, show=show, method=method, requester=self.login)
def check(self, method, query=None):
if query is not None:
if not self._server.check(query, self.login, method):
                raise RightError('You don\'t have the right to do that')
else:
if not self._server.check_method(self.login, method):
                raise RightError('You don\'t have the right to do that')
def register(self, obj_id, role):
""" Register a new child.
"""
child = '%s.%s' % (self.login, obj_id)
# Register the children in the tags database:
obj = TqlObject(child)
obj.register(StaticTag('r', role))
obj.register(StaticTag('p', self.login))
self._server.db.register(obj)
self._children[obj_id] = obj
def unregister(self, obj_id):
""" Unregister a child.
"""
child = '%s.%s' % (self.login, obj_id)
del self._children[obj_id]
# Unregister the children from the tags database:
self._server.db.unregister(child)
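Both `register` and `unregister` key children under the parent's login. A sketch of the id scheme: a node "hv1" registering object "vm7" yields the fully qualified id "hv1.vm7" in the tags database, while `self._children` keeps only the short id.

```python
# The '%s.%s' pattern used by register()/unregister() above.
def qualified_id(login, obj_id):
    return '%s.%s' % (login, obj_id)

print(qualified_id('hv1', 'vm7'))  # hv1.vm7
```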
def sub_tags_register(self, obj_id, name, ttl=None, value=None):
""" Register a new remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to register
:param ttl: TTL of the tag if applicable (None = no TTL, the tag will
never expire)
:param value: value of the tag
"""
callback = partial(self.async_remote_sub_tags, obj_id)
tag = RemoteTag(name, callback, ttl=ttl)
self._children[obj_id].register(tag)
def sub_tags_unregister(self, obj_id, name):
""" Unregister a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to unregister
"""
self._children[obj_id].unregister(name)
def sub_tags_drop(self, obj_id, name):
""" Drop the cached value of a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to drop
"""
tag = self._children[obj_id].get(name)
if tag is not None:
tag.invalidate()
def sub_tags_update(self, obj_id, name, value=None, ttl=None):
""" Update a remote tag for a child of the client.
:param obj_id: child name
:param name: name of the tag to update
:param value: new value of the tag
:param ttl: new ttl of the tag
"""
tag = self._children[obj_id].get(name)
if tag is not None:
if value is not None:
tag.cached = value
if ttl is not None:
tag.ttl = ttl
def async_remote_sub_tags(self, obj_id, watcher, robj, tags):
""" Asynchronously update sub tags from the remote client using
specified watcher.
"""
        watcher.register(self.conn, 'sub_tags', obj_id, tags, _data=(tags, robj))

cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/bootstrap.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see .
from cloudcontrol.server.clients import Client
from cloudcontrol.server.clients.host import HostClient
from cloudcontrol.server.db import SObject
from cloudcontrol.common.tql.db.tag import StaticTag
class BootstrapClient(HostClient):
""" A bootstrap client connected to the cc-server.
"""
ROLE = 'bootstrap'
def _get_tql_object(self):
tql_object = SObject(self.login)
tql_object.register(StaticTag('r', self.role))
self._server.db.register(tql_object)
return tql_object
@property
def login(self):
return '%s.%s' % (self._login, self.conn.get_fd())
@property
def role(self):
return 'host'
def shutdown(self):
super(BootstrapClient, self).shutdown()
        # Also, remove the object from the db:
self._server.db.unregister(self.login)
Client.register_client_class(BootstrapClient)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/cli.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see .
from sjrpc.core import RpcError
from cloudcontrol.common.datastructures.orderedset import OrderedSet
from cloudcontrol.common.allocation.vmspec import expand_vmspec
from cloudcontrol.server.conf import CCConf
from cloudcontrol.server.exceptions import (ReservedTagError, BadRoleError,
NotConnectedAccountError, CloneError)
from cloudcontrol.server.election import Elector
from cloudcontrol.server.repository import RepositoryOperationError
from cloudcontrol.server.handlers import listed, Reporter
from cloudcontrol.server.clients import Client, RegisteredCCHandler
from cloudcontrol.server.jobs import (ColdMigrationJob, HotMigrationJob,
CloneJob, KillClientJob, AllocationJob)
from cloudcontrol.common.tql.db.tag import StaticTag
MIGRATION_TYPES = {'cold': ColdMigrationJob,
'hot': HotMigrationJob,}
RESCUE_SCRIPT = 'rescue'
class CliHandler(RegisteredCCHandler):
""" Handler binded to the 'cli' role.
Summary of methods:
.. currentmodule:: cloudcontrol.server.clients.cli
.. autosummary::
CliHandler.list
CliHandler.wall
CliHandler.start
CliHandler.stop
CliHandler.destroy
CliHandler.pause
CliHandler.resume
CliHandler.disablevirtiocache
CliHandler.autostart
CliHandler.undefine
CliHandler.passwd
CliHandler.addaccount
CliHandler.copyaccount
CliHandler.addtag
CliHandler.deltag
CliHandler.tags
CliHandler.delaccount
CliHandler.close
CliHandler.declose
CliHandler.kill
CliHandler.loadrights
CliHandler.saverights
CliHandler.execute
CliHandler.shutdown
CliHandler.jobs
CliHandler.cancel
CliHandler.purge
CliHandler.attachment
CliHandler.loadscript
CliHandler.savescript
CliHandler.delscript
CliHandler.runscript
CliHandler.loadplugin
CliHandler.saveplugin
CliHandler.delplugin
CliHandler.installplugin
CliHandler.uninstallplugin
CliHandler.runplugin
CliHandler.electiontypes
CliHandler.election
CliHandler.migrate
CliHandler.clone
CliHandler.console
CliHandler.shell
CliHandler.resize
CliHandler.forward
CliHandler.dbstats
"""
@listed
def list(self, query, method='list'):
""" List all objects registered on this instance.
:param query: the query to select objects to show
"""
self.logger.debug('Executed list function with query %s', query)
objects = self.client.list(query, method=method)
order = OrderedSet(['id'])
#if tags is not None:
# order |= OrderedSet(tags)
return {'objects': objects, 'order': list(order)}
@listed
def wall(self, message):
""" Send a wall to all connected users.
"""
self.client.check('wall')
self.server.wall(self.client.login, message)
@listed
def loadmotd(self):
""" Load and return the message of the day.
"""
return self.server.load_motd()
@listed
def savemotd(self, motd):
""" Save a new message of the day.
"""
self.server.save_motd(motd)
#
# VM actions:
#
def _vm_action(self, query, method, *args, **kwargs):
""" Do an action on a virtual machine.
"""
errs = Reporter()
# Search all hypervisors of selected vms:
for vm in self.client.list(query, show=('r', 'h', 'p'),
method=method[3:]): # Strip the vm_ prefix
if vm['r'] != 'vm':
errs.error(vm['id'], 'not a vm')
else:
hvclient = self.server.get_client(vm['p'])
if hvclient is None:
errs.error(vm['id'], 'offline hypervisor')
else:
try:
hvclient.vm_action(method, vm['h'], *args, **kwargs)
except Exception as err:
errs.error(vm['id'], str(err))
else:
errs.success(vm['id'], 'ok')
return errs.get_dict()
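`_vm_action` and most CLI handlers funnel per-object outcomes through a `Reporter` and return its dict. A hypothetical mirror of that pattern (the real `Reporter` lives in `cloudcontrol.server.handlers` and its exact fields are assumed here): each object id maps to an errcode of success, error, or warn plus a message.

```python
# Illustrative stand-in for the assumed Reporter contract used by the
# CLI handlers: one entry per object id, tagged success/error/warn.
class MiniReporter(object):
    def __init__(self):
        self._report = {}

    def success(self, obj_id, msg, **extra):
        self._report[obj_id] = dict(extra, errcode='success', msg=msg)

    def error(self, obj_id, msg, **extra):
        self._report[obj_id] = dict(extra, errcode='error', msg=msg)

    def warn(self, obj_id, msg, **extra):
        self._report[obj_id] = dict(extra, errcode='warn', msg=msg)

    def get_dict(self):
        return self._report

errs = MiniReporter()
errs.success('vm1', 'ok')
errs.error('vm2', 'offline hypervisor')
print(errs.get_dict()['vm2']['errcode'])  # error
```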
@listed
def start(self, query, pause=False):
""" Start a virtual machine.
"""
return self._vm_action(query, 'vm_start', pause)
@listed
def stop(self, query):
""" Stop a virtual machine.
"""
return self._vm_action(query, 'vm_stop')
@listed
def destroy(self, query):
""" Destroy (hard shutdown) a virtual machine.
"""
return self._vm_action(query, 'vm_destroy')
@listed
def pause(self, query):
""" Pause a virtual machine.
"""
return self._vm_action(query, 'vm_suspend')
@listed
def resume(self, query):
""" Resume a virtual machine.
"""
return self._vm_action(query, 'vm_resume')
@listed
def disablevirtiocache(self, query):
""" Set virtio cache to none on VMs disk devices.
:param query: tql query
"""
return self._vm_action(query, 'vm_disable_virtio_cache')
@listed
def autostart(self, query, flag):
""" Set autostart flag on VMs.
:param query: tql query
:param bool flag: autostart value to set
"""
return self._vm_action(query, 'vm_set_autostart', flag)
@listed
def undefine(self, query, delete_storage=True):
""" Undefine selected virtual machines.
:param query: the tql query to select objects.
:param delete_storage: delete storage of vm.
:return: a dict where key is the id of a selected object, and the value
a tuple (errcode, message) where errcode is (success|error|warn) and
message an error message or the output of the command in case of
success.
"""
objects = self.client.list(query, show=('r', 'p', 'h', 'disk*',), method='undefine')
errs = Reporter()
for obj in objects:
if obj['r'] != 'vm':
errs.error(obj['id'], 'bad role')
continue
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
errs.error(obj['id'], 'hypervisor not connected')
else:
if delete_storage:
for disk in obj.get('disk', '').split():
pool = obj.get('disk%s_pool' % disk)
name = obj.get('disk%s_vol' % disk)
hvcon.proxy.vol_delete(pool, name)
hvcon.proxy.vm_undefine(obj['h'])
errs.success(obj['id'], 'vm undefined')
return errs.get_dict()
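The storage-deletion loop in `undefine` relies on a tag layout where the "disk" tag lists disk ids separated by spaces and "disk&lt;id&gt;_pool" / "disk&lt;id&gt;_vol" locate each volume. A sketch with made-up tag values showing how the loop resolves them:

```python
# Hypothetical tag values illustrating the disk-tag layout assumed by
# the undefine() loop above.
obj = {'disk': '0 1',
       'disk0_pool': 'default', 'disk0_vol': 'vm1-root',
       'disk1_pool': 'default', 'disk1_vol': 'vm1-swap'}

# Same lookups as the loop: one (pool, volume) pair per listed disk id.
volumes = [(obj['disk%s_pool' % d], obj['disk%s_vol' % d])
           for d in obj.get('disk', '').split()]
print(volumes)  # [('default', 'vm1-root'), ('default', 'vm1-swap')]
```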
@listed
def rescue(self, query):
""" Enable rescue mode on selected virtual machines.
:param query: the tql query to select objects.
"""
objects = self.client.list(query, show=('r', 'p', 'h', 'status'), method='rescue')
if objects:
# Load the script:
sha1_hash, _ = self.server.scripts.load(RESCUE_SCRIPT)
errs = Reporter()
for obj in objects:
if obj['r'] != 'vm':
errs.error(obj['id'], 'bad role')
continue
elif obj['status'] != 'stopped':
errs.error(obj['id'], 'VM must be stopped')
continue
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
errs.error(obj['id'], 'hypervisor not connected')
else:
args = (None, obj['h'])
job_id = hvcon.script_run(sha1_hash, RESCUE_SCRIPT,
self.client.login, *args)
output = hvcon.job_poll(job_id)
errs.success(obj['id'], output)
return errs.get_dict()
@listed
def unrescue(self, query):
""" Disable rescue mode on selected virtual machines.
:param query: the tql query to select objects.
"""
objects = self.client.list(query, show=('r', 'p', 'h', 'status'), method='rescue')
if objects:
# Load the script:
sha1_hash, _ = self.server.scripts.load(RESCUE_SCRIPT)
errs = Reporter()
for obj in objects:
if obj['r'] != 'vm':
errs.error(obj['id'], 'bad role')
continue
elif obj['status'] != 'stopped':
errs.error(obj['id'], 'VM must be stopped')
continue
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
errs.error(obj['id'], 'hypervisor not connected')
else:
args = (None, '-u', obj['h'])
job_id = hvcon.script_run(sha1_hash, RESCUE_SCRIPT,
self.client.login, *args)
output = hvcon.job_poll(job_id)
errs.success(obj['id'], output)
return errs.get_dict()
#
# Account management:
#
@listed
def passwd(self, query, password, method='ssha'):
""" Define a new password for selected users.
:param query: the query to select the objects to change
:param password: the password to set (None to remove password)
:param method: the hash method (sha, ssha, md5, smd5 or plain)
:return: a standard report output
"""
objects = self.client.list(query, show=('a',), method='passwd')
errs = Reporter()
with self.conf:
for obj in objects:
if 'a' not in obj:
errs.error(obj['id'], 'not an account')
continue
self.conf.set_password(obj['a'], password, method)
errs.success(obj['id'], 'password updated')
return errs.get_dict()
@listed
def addaccount(self, login, role, password=None):
""" Create a new account with specified login.
:param login: the login of the new account
:param role: the role of the new account
:param password: the password of the new account (None = not set)
"""
self.client.check('addaccount')
if role in Client.roles:
self.conf.create_account(login, role, password)
else:
raise BadRoleError('%r is not a legal role.' % role)
@listed
def copyaccount(self, copy_login, login, password=None):
""" Create a new account with specified login.
:param copy_login: the login of the account to copy
:param login: the login of the new account
:param password: the password of the new account (default None)
"""
self.client.check('addaccount')
self.conf.copy_account(copy_login, login, password)
@listed
def addtag(self, query, tag_name, tag_value):
""" Add a tag to the accounts which match the specified query.
:param query: the query to select objects
:param tag_name: the name of the tag to add
:param tag_value: the value of the tag
"""
if tag_name in self.server.RESERVED_TAGS:
raise ReservedTagError('Tag %r is read-only' % tag_name)
objects = self.client.list(query, show=('a', 'p', 'h'), method='addtag')
errs = Reporter()
with self.conf:
for obj in objects:
if 'a' in obj:
# Update the configuration for this account:
tags = self.conf.show(obj['a'])['tags']
if tag_name in tags:
errs.warn(obj['id'], 'tag already exists, changed from %s'
' to %s' % (tags[tag_name], tag_value))
# Update the object db (update the tag value):
dbobj = self.server.db.get(obj['id'])
dbobj[tag_name].value = tag_value
else:
errs.success(obj['id'], 'tag created')
# Update the object db (create the tag):
dbobj = self.server.db.get(obj['id'])
dbobj.register(StaticTag(tag_name, tag_value), override=True)
self.conf.add_tag(obj['a'], tag_name, tag_value)
elif 'p' in obj:
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
errs.error(obj['id'], 'hypervisor not connected')
else:
try:
hvcon.proxy.tag_add(obj['h'], tag_name, tag_value)
except NameError:
errs.error(obj['id'], 'hypervisor does not handle tags on VMs')
                        except Exception as err:
errs.error(obj['id'], str(err))
else:
errs.success(obj['id'], 'tag set on VM')
else:
errs.error(obj['id'], 'this object cannot have static tags')
continue
return errs.get_dict()
@listed
def deltag(self, query, tag_name):
""" Remove a tag of the selected accounts.
:param query: the query to select objects
:param tag_name: the name of the tag to remove
"""
if tag_name in self.server.RESERVED_TAGS:
raise ReservedTagError('Tag %r is read-only' % tag_name)
objects = self.client.list(query, show=('a', 'p', 'h'), method='deltag')
errs = Reporter()
with self.conf:
for obj in objects:
if 'a' in obj:
tags = self.conf.show(obj['a'])['tags']
if tag_name in tags:
errs.success(obj['id'], 'tag deleted')
dbobj = self.server.db.get(obj['id'])
dbobj.unregister(tag_name, override=True)
else:
errs.warn(obj['id'], 'unknown tag')
self.server.conf.remove_tag(obj['a'], tag_name)
elif 'p' in obj:
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
errs.error(obj['id'], 'hypervisor not connected')
else:
try:
hvcon.proxy.tag_delete(obj['h'], tag_name)
except NameError:
errs.error(obj['id'], 'hypervisor does not handle tags on VMs')
                        except Exception as err:
errs.error(obj['id'], str(err))
else:
errs.success(obj['id'], 'tag deleted on VM')
else:
errs.error(obj['id'], 'this object cannot have static tags')
continue
return errs.get_dict()
@listed
def tags(self, query):
""" Return all static tags attached to the selected accounts.
:param query: the query to select objects
"""
objects = self.client.list(query, show=('a', 'p', 'h'), method='tags')
tags = []
for obj in objects:
o = {'id': obj['id']}
if 'a' in obj:
otags = self.server.conf.show(obj['a'])['tags']
otags.update({'id': obj['id']})
o.update(otags)
elif 'p' in obj:
try:
hvcon = self.server.get_client(obj['p'])
except KeyError:
pass
else:
try:
otags = hvcon.proxy.tag_show(obj['h'])
except:
self.logger.exception('Unable to get tags, node error')
else:
otags.update({'id': obj['id']})
o.update(otags)
tags.append(o)
return {'objects': tags, 'order': ['id']}
@listed
def delaccount(self, query):
""" Delete the accounts selected by query.
:param query: the query to select objects
"""
objects = self.client.list(query, show=('a',), method='delaccount')
errs = Reporter()
with self.server.conf:
for obj in objects:
if 'a' not in obj:
errs.error(obj['id'], 'not an account')
continue
try:
self.server.conf.remove_account(obj['a'])
except CCConf.UnknownAccount:
errs.error(obj['id'], 'unknown account')
else:
errs.success(obj['id'], 'account deleted')
self.server.jobs.spawn(KillClientJob, self.client.login,
settings={'server': self.server,
'account': obj['a'],
'gracetime': 1})
return errs.get_dict()
@listed
def close(self, query):
""" Close selected account an account without deleting them.
:param query: the query to select objects
"""
objects = self.client.list(query, show=('a',), method='close')
errs = Reporter()
with self.server.conf:
for obj in objects:
if 'a' not in obj:
errs.error(obj['id'], 'not an account')
continue
tags = self.server.conf.show(obj['a'])['tags']
if 'close' in tags:
errs.warn(obj['id'], 'account already closed')
continue
errs.success(obj['id'], 'closed')
self.server.conf.add_tag(obj['a'], 'close', 'yes')
dbobj = self.server.db.get(obj['id'])
dbobj.register(StaticTag('close', 'yes'), override=True)
self.server.jobs.spawn(KillClientJob, self.client.login,
settings={'server': self.server,
'account': obj['a'],
'gracetime': 1})
return errs.get_dict()
@listed
def declose(self, query):
""" Re-open selected closed accounts.
:param query: the query to select objects
"""
objects = self.client.list(query, show=('a',), method='declose')
errs = Reporter()
with self.server.conf:
for obj in objects:
if 'a' not in obj:
errs.error(obj['id'], 'not an account')
continue
tags = self.conf.show(obj['a'])['tags']
if 'close' in tags:
errs.success(obj['id'], 'account declosed')
self.conf.remove_tag(obj['a'], 'close')
dbobj = self.server.db.get(obj['id'])
dbobj.unregister('close', override=True)
else:
errs.warn(obj['id'], 'account not closed')
return errs.get_dict()
@listed
def kill(self, query):
""" Disconnect all connected accounts selected by query.
:param query: the query to select objects
"""
objects = self.client.list(query, show=set(('a',)), method='kill')
errs = Reporter()
with self.server.conf:
for obj in objects:
if 'a' not in obj:
errs.error(obj['id'], 'not an account')
continue
try:
self.server.kill(obj['a'])
except NotConnectedAccountError:
errs.error(obj['id'], 'account is not connected')
else:
errs.success(obj['id'], 'account killed')
return errs.get_dict()
#
# Rights management:
#
@listed
def loadrights(self):
""" Get the current ruleset.
"""
self.client.check('rights')
return self.server.rights.export()
@listed
def saverights(self, ruleset):
""" Set the current ruleset.
:param ruleset: the ruleset to load.
"""
self.client.check('rights')
self.server.rights.load(ruleset)
#
# Nodes actions:
#
@listed
def execute(self, query, command):
""" Execute command on matched objects (must be roles hv or host).
:param query: the tql query to select objects.
:param command: the command to execute on each object
:return: a dict where key is the id of a selected object, and the value
a tuple (errcode, message) where errcode is (success|error|warn) and
message an error message or the output of the command in case of
success.
"""
objects = self.client.list(query, show=('r',), method='execute')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('hv', 'host'):
errs.error(obj['id'], 'bad role')
continue
try:
client = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
else:
try:
returned = client.execute(command)
except Exception as err:
errs.error(obj['id'], str(err))
else:
errs.success(obj['id'], 'command executed', output=returned)
return errs.get_dict()
@listed
def shutdown(self, query, reboot=True, gracefull=True):
""" Execute a shutdown on selected objects (must be roles hv or host).
:param query: the tql query to select objects.
:param reboot: reboot the host instead of just shut it off
:param gracefull: properly shutdown the host
:return: a dict where key is the id of a selected object, and the value
a tuple (errcode, message) where errcode is (success|error|warn) and
message an error message.
"""
objects = self.client.list(query, show=set(('r',)), method='execute')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('hv', 'host'):
errs.error(obj['id'], 'bad role')
continue
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
else:
try:
node.shutdown_node(reboot, gracefull)
except RpcError as err:
errs.error(obj['id'], '%s (exc: %s)' % (err.message,
err.exception))
else:
errs.success(obj['id'], 'ok')
return errs.get_dict()
#
# Jobs management:
#
@listed
def cancel(self, query):
""" Cancel a job.
:param query: the tql query used to select jobs to cancel
"""
objects = self.client.list(query, show=('r', 'p'), method='cancel')
errs = Reporter()
for obj in objects:
if obj['r'] != 'job':
errs.error(obj['id'], 'not a job')
elif 'p' in obj:
sub_id = obj['id'][len(obj['p']) + 1:]
client = self.server.get_client(obj['p'])
try:
client.proxy.job_cancel(sub_id)
except Exception as err:
errs.error(obj['id'], '%s' % err)
else:
errs.success(obj['id'], 'job cancelled')
else:
try:
self.server.jobs.get(obj['id']).cancel()
except KeyError:
errs.error(obj['id'], 'unknown job')
else:
errs.success(obj['id'], 'job cancelled')
return errs.get_dict()
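`cancel` (like `purge` and `attachment` below) recovers the node-local job id by slicing off the parent prefix: a job owned by node "hv1" appears globally as "hv1.job42", and dropping `len(parent) + 1` characters removes both the prefix and the separating dot.

```python
# The same slice used by cancel()/purge()/attachment(); the ids here
# are made-up examples.
obj = {'id': 'hv1.job42', 'p': 'hv1'}
sub_id = obj['id'][len(obj['p']) + 1:]
print(sub_id)  # job42
```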
@listed
def purge(self, query):
""" Purge matching jobs from server.
:param query: the tql query used to select jobs to purge
.. note::
            Purge only works for jobs in the "done" state.
"""
objects = self.client.list(query, show=('r', 'p', 'state'), method='purge')
errs = Reporter()
for obj in objects:
if obj['r'] != 'job':
errs.error(obj['id'], 'not a job')
elif obj['state'] != 'done':
errs.error(obj['id'], 'job must be done')
elif 'p' in obj:
sub_id = obj['id'][len(obj['p']) + 1:]
client = self.server.get_client(obj['p'])
try:
client.proxy.job_purge(sub_id)
except Exception as err:
errs.error(obj['id'], '%s' % err)
else:
errs.success(obj['id'], 'job purged')
else:
                try:
                    self.server.jobs.purge(obj['id'])
                except KeyError:
                    errs.warn(obj['id'], 'job already purged')
else:
errs.success(obj['id'], 'job purged')
return errs.get_dict()
@listed
def attachment(self, query, name):
""" Get the specified attachment for jobs matching query.
:param query: the tql query used to select jobs
"""
objects = self.client.list(query, show=('r', 'p'), method='attachment')
errs = Reporter()
for obj in objects:
if obj['r'] != 'job':
errs.error(obj['id'], 'not a job')
elif 'p' in obj:
sub_id = obj['id'][len(obj['p']) + 1:]
client = self.server.get_client(obj['p'])
try:
output = client.proxy.job_attachment(sub_id, name)
except Exception as err:
errs.error(obj['id'], '%s' % err)
else:
errs.success(obj['id'], 'ok', output=output)
else:
job = self.server.jobs.get(obj['id'])
if job is None:
errs.error(obj['id'], 'unknown job')
else:
attachment = job.read_attachment(name)
errs.success(obj['id'], 'ok', output=attachment)
return errs.get_dict()
#
# Script management:
#
@listed
def loadscript(self, name):
""" Get the content of a script stored ont he server.
:param name: name of the script to load
"""
self.client.check('script')
return self.server.scripts.load(name, empty_if_missing=True)[1]
@listed
def savescript(self, name, content):
""" Save a script on the server.
:param name: name of the script to save
:param content: content of the script to save
"""
self.client.check('script')
return self.server.scripts.save(name, content)
@listed
def delscript(self, query):
""" Delete a script on the server.
:param query: the tql query used to select scripts to delete
"""
objects = self.client.list(query, show=('r', 'name'), method='script')
errs = Reporter()
for obj in objects:
if obj['r'] != 'script':
errs.error(obj['id'], 'not a script')
else:
try:
self.server.scripts.delete(obj['name'])
except RepositoryOperationError as err:
errs.error(obj['id'], 'error: %s' % err)
else:
errs.success(obj['id'], 'script deleted')
return errs.get_dict()
@listed
def runscript(self, query, script, batch=None, *args):
""" Execute a script on matching nodes.
"""
# Load the script:
sha1_hash, _ = self.server.scripts.load(script)
objects = self.client.list(query, show=('r', ), method='script')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('host', 'hv'):
errs.error(obj['id'], 'not a host')
else:
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
continue
try:
job_id = node.script_run(sha1_hash, script,
self.client.login, batch=batch,
*args)
except RpcError as err:
errs.error(obj['id'], '%s (exc: %s)' % (err.message,
err.exception))
else:
job_id = '.'.join((obj['id'], job_id))
errs.success(obj['id'], 'ok.', jobs=job_id)
return errs.get_dict()
#
# Plugins management:
#
@listed
def loadplugin(self, name):
""" Get the content of a plugin stored ont he server.
:param name: name of the plugin to load
"""
self.client.check('plugin')
return self.server.plugins.load(name, empty_if_missing=True)[1]
@listed
def saveplugin(self, name, content):
""" Save a plugin on the server.
:param name: name of the plugin to save
:param content: content of the plugin to save
"""
self.client.check('plugin')
return self.server.plugins.save(name, content)
@listed
def delplugin(self, query):
""" Delete a plugin on the server.
:param query: the tql query used to select plugins to delete
"""
objects = self.client.list(query, show=('r', 'name'), method='plugin')
errs = Reporter()
for obj in objects:
if obj['r'] != 'plugin':
errs.error(obj['id'], 'not a plugin')
else:
try:
self.server.plugins.delete(obj['name'])
except RepositoryOperationError as err:
errs.error(obj['id'], 'error: %s' % err)
else:
errs.success(obj['id'], 'plugin deleted')
return errs.get_dict()
@listed
def installplugin(self, query, plugin):
""" Install a plugin on matching nodes.
"""
# Load the plugin:
sha1_hash, _ = self.server.plugins.load(plugin)
objects = self.client.list(query, show=('r', ), method='plugin')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('host', 'hv'):
errs.error(obj['id'], 'not a host')
else:
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
continue
try:
node.plugin_install(sha1_hash, plugin)
except RpcError as err:
errs.error(obj['id'], '%s (exc: %s)' % (err.message,
err.exception))
else:
errs.success(obj['id'], 'plugin installed')
return errs.get_dict()
@listed
def uninstallplugin(self, query, plugin):
        """ Uninstall a plugin from matching nodes.
"""
objects = self.client.list(query, show=('r', ), method='plugin')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('host', 'hv'):
errs.error(obj['id'], 'not a host')
else:
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
continue
try:
node.plugin_uninstall(plugin)
except RpcError as err:
errs.error(obj['id'], '%s (exc: %s)' % (err.message,
err.exception))
else:
errs.success(obj['id'], 'plugin uninstalled')
return errs.get_dict()
@listed
def helpplugin(self, query, plugin, method):
""" Get help about a plugin installed on the matching host.
"""
objects = self.client.list(query, show=('r', ), method='plugin')
errs = Reporter()
for obj in objects:
            if obj['r'] not in ('host', 'hv'):
errs.error(obj['id'], 'not a host')
else:
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
continue
try:
                    help_text = node.plugin_help(plugin, method)
                except RpcError as err:
                    errs.error(obj['id'], '%s (exc: %s)' % (err.message,
                                                            err.exception))
                else:
                    errs.success(obj['id'], 'ok.', output=help_text)
return errs.get_dict()
@listed
def runplugin(self, query, plugin, method, batch=None, **kwargs):
""" Execute a plugin method on matching nodes.
"""
# Load the plugin:
sha1_hash, _ = self.server.plugins.load(plugin)
objects = self.client.list(query, show=('r', ), method='plugin')
errs = Reporter()
for obj in objects:
if obj['r'] not in ('host', 'hv'):
errs.error(obj['id'], 'not a host')
else:
try:
node = self.server.get_client(obj['id'])
except KeyError:
errs.error(obj['id'], 'node not connected')
continue
try:
job_id = node.plugin_run(plugin, method, self.client.login,
batch=batch, **kwargs)
except RpcError as err:
errs.error(obj['id'], '%s (exc: %s)' % (err.message,
err.exception))
else:
job_id = '.'.join((obj['id'], job_id))
errs.success(obj['id'], 'ok.', jobs=job_id)
return errs.get_dict()
#
# Election / Migration / Cloning / Allocation
#
@listed
def electiontypes(self):
return Elector.ALGO_BY_TYPES
@listed
def election(self, query_vm, query_dest, mtype='cold', algo='fair', **kwargs):
        """ Consult the server to plan the migration of the specified VMs
        onto a hypervisor pool.
:param query_vm: the tql query to select VMs to migrate
:param query_dest: the tql query to select destination hypervisors
candidates
:param mtype: type of migration
:param algo: algo used for distribution
"""
elector = Elector(self.server, query_vm, query_dest, self.client)
return elector.election(mtype, algo, **kwargs)
@listed
def migrate(self, migration_plan):
""" Launch the provided migration plan.
:param migration_plan: the plan of the migration.
:return: a standard error report
"""
errs = Reporter()
for migration in migration_plan:
            # Check if the migration type is known:
if migration['type'] in MIGRATION_TYPES:
mtype = MIGRATION_TYPES[migration['type']]
else:
                errmsg = 'unknown migration type %r' % migration['type']
errs.error(migration['sid'], errmsg)
continue
vm = self.server.db.get_by_id(migration['sid'], ('h', 'hv', 'p'))
# Construct the migration properties:
migration_properties = {
'client': self.client,
'server': self.server,
'vm_name': vm['h'],
'hv_source': vm['p'],
'hv_dest': migration['did']
}
# Create the job:
job = self.client.spawn_job(mtype, settings=migration_properties)
errs.success(migration['sid'], 'migration launched, id:%s' % job.id)
return errs.get_dict()
@listed
def clone(self, tql_vm, tql_dest, name):
""" Create and launch a clone job.
:param tql_vm: a tql matching one vm object (the cloned vm)
:param tql_dest: a tql matching one hypervisor object (the destination
hypervisor)
:param name: the new name of the VM
"""
vm = self.client.list(tql_vm, show=('r', 'h', 'p'), method='clone')
if len(vm) != 1:
raise CloneError('VM Tql must select ONE vm')
elif vm[0]['r'] != 'vm':
            raise CloneError('VM Tql must select a vm')
else:
vm = vm[0]
dest = self.client.list(tql_dest, show=('r',), method='clone')
if len(dest) != 1:
raise CloneError('Destination Tql must select ONE hypervisor')
elif dest[0]['r'] != 'hv':
            raise CloneError('Destination Tql must select a hypervisor')
else:
dest = dest[0]
job = self.client.spawn_job(CloneJob, settings={'server': self.server,
'client': self.client,
'vm_name': vm['h'],
'new_vm_name': name,
'hv_source': vm['p'],
'hv_dest': dest['id']})
return job.id
@listed
def allocate(self, vmspec, tql_target):
""" Allocate new VMs.
:param vmspec: a vmspec structure
:param tql_target: a TQL used as target restriction
"""
self.client.check('allocate')
# Check and expand vmspec input:
expanded_vmspec = expand_vmspec(vmspec)
job = self.client.spawn_job(AllocationJob, settings={'server': self.server,
'client': self.client,
'expanded_vmspec': expanded_vmspec,
'tql_target': tql_target})
return job.id
#
# Remote console:
#
@listed
def console(self, tql):
""" Start a remote console on object matching the provided tql.
        :param tql: tql matching only one object on which to start the console
:return: the label of the created tunnel
"""
objects = self.client.list(tql, show=('r', 'p', 'h'), method='console')
if not objects:
raise NotImplementedError('No objects matched by query')
elif len(objects) != 1:
            raise NotImplementedError('Console only supports one tunnel at a time for now')
errs = Reporter()
for obj in objects:
if obj['r'] in ('vm',):
client = self.server.get_client(obj['p'])
try:
srv_to_host_tun = client.console(obj['h'])
except Exception as err:
errs.error(obj['id'], str(err))
else:
cli_tun = self.client.register_tunnel('console', client, srv_to_host_tun)
errs.success(obj['id'], 'tunnel started.', output=cli_tun.label)
else:
errs.error(obj['id'], 'bad role')
return errs.get_dict()
#
# Remote shell:
#
@listed
def shell(self, tql):
""" Start a remote shell on object matching the provided tql.
        :param tql: tql matching only one object on which to start the shell
:return: the label of the created tunnel
"""
objects = self.client.list(tql, show=('r', 'p'), method='shell')
if not objects:
raise NotImplementedError('No objects matched by query')
elif len(objects) != 1:
            raise NotImplementedError('Shell only supports one tunnel at a time for now')
errs = Reporter()
for obj in objects:
if obj['r'] in ('host', 'hv'):
client = self.server.get_client(obj['id'])
srv_to_host_tun = client.shell()
cli_tun = self.client.register_tunnel('shell', client, srv_to_host_tun)
errs.success(obj['id'], 'tunnel started.', output=cli_tun.label)
else:
errs.error(obj['id'], 'bad role')
return errs.get_dict()
@listed
def resize(self, label, row, col, xpixel, ypixel):
""" Send a resize event to the remote shell's tty.
:param label: label of the tty tunnel to resize
:param row: number of rows
:param col: number of columns
:param xpixel: unused
:param ypixel: unused
"""
if label is None:
tuns = [(c, st.label) for t, c, ct, st
in self.client.tunnels.values() if t == 'shell']
else:
ttype, client, ctun, stun = self.client.get_tunnel(label)
if ttype != 'shell':
                raise ValueError('Label does not refer to a shell tunnel')
tuns = [(client, stun.label)]
for client, label in tuns:
client.resize(label, row, col, xpixel, ypixel)
#
# Port forwarding:
#
@listed
def forward(self, login, port, destination='127.0.0.1'):
""" Forward a TCP port to the client.
        :param login: login of the remote client on which to establish the tunnel
        :param port: port to connect to on the destination
        :param destination: tunnel destination (from the remote client side)
"""
self.client.check('forward', query='id=%s' % login)
# Create the tunnel to the node:
try:
host_client = self.server.get_client(login)
except KeyError:
raise KeyError('Specified client is not connected')
s2n_tun = host_client.forward(port, destination)
# Create tunnel to the CLI
c2s_tun = self.client.register_tunnel('forward', host_client, s2n_tun)
return c2s_tun.label
#
# Debug:
#
@listed
def dbstats(self):
""" Get statistics about tql database.
"""
return self.server.db.stats()
def forward_call(self, login, func, *args, **kwargs):
""" Forward a call to a connected client and return result.
:param login: login of the connected client
:param func: function to execute on the client
:param \*args, \*\*kwargs: arguments of the call
"""
self.client.check('forward_call')
client = self.server.get_client(login)
return client.conn.call(func, *args, **kwargs)
class CliClient(Client):
""" A cli client connected to the cc-server.
"""
ROLE = 'cli'
RPC_HANDLER = CliHandler
KILL_ALREADY_CONNECTED = True
def __init__(self, *args, **kwargs):
super(CliClient, self).__init__(*args, **kwargs)
self._tunnels = {} # Running tunnels for this client (as client)
@property
def tunnels(self):
""" Get active client tunnels by label (a dict).
"""
return self._tunnels
def spawn_job(self, job_class, **kwargs):
return self._server.jobs.spawn(job_class, self.login, **kwargs)
def register_tunnel(self, ttype, client, tun, label=None):
""" Create and register a tunnel for this client.
:param ttype: type of tunnel
        :param client: client where the tunnel goes
:param tun: the tunnel of this client
:param label: label of the tunnel to create
"""
def cb_on_shutdown(tun):
# Call the default callback:
tun.cb_default_on_shutdown(tun)
# Delete the tunnel from the current running tunnels:
self.unregister_tunnel(tun.label)
ctun = self.conn.create_tunnel(label=label, endpoint=tun.socket,
on_shutdown=cb_on_shutdown)
self._tunnels[ctun.label] = (ttype, client, ctun, tun)
return ctun
def get_tunnel(self, label):
        """ Get the tunnel bound to the provided label.
:return: a tuple (type, remote_client, tunnel, remote_client_tunnel)
where: **type** is a string provided on tunnel creation,
**remote_client** the client object of the remote client on which
the tunnel is established, **tunnel** the cli-to-server tunnel
object from the sjRpc, **remote_client_tunnel** the
server-to-remote-client tunnel object from the sjRpc.
"""
return self._tunnels[label]
def unregister_tunnel(self, label):
try:
del self._tunnels[label]
except KeyError:
pass
def wall(self, sender, message):
""" Send a wall to the client.
"""
self.conn.call('wall', sender, message)
Client.register_client_class(CliClient)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/host.py
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl.  If not, see <http://www.gnu.org/licenses/>.
from cloudcontrol.server.clients import Client, RegisteredCCHandler
class HostHandler(RegisteredCCHandler):
    """ Handler bound to a host client.
"""
def script_get(self, name):
""" Get a script by its name.
:param name: name of the script to get
"""
return self.server.scripts.load(name)
def plugin_get(self, name):
""" Get a plugin by its name.
:param name: name of the plugin to get
"""
return self.server.plugins.load(name)
class HostClient(Client):
""" A host client connected to the cc-server.
"""
ROLE = 'host'
RPC_HANDLER = HostHandler
def __init__(self, *args, **kwargs):
super(HostClient, self).__init__(*args, **kwargs)
def execute(self, command):
return self.conn.call('execute_command', command)
def shutdown_node(self, reboot=False, gracefull=True):
""" Shutdown the remote node.
"""
return self.conn.call('node_shutdown', reboot, gracefull)
def console(self, name):
""" Start a remote console on the specified vm.
"""
label = self.proxy.vm_open_console(name)
tun = self.conn.create_tunnel(label=label)
return tun
def shell(self):
""" Start a remote shell on the host.
"""
label = self.proxy.shell()
tun = self.conn.create_tunnel(label=label)
return tun
def resize(self, label, *args, **kwargs):
return self.proxy.resize(label, *args, **kwargs)
def forward(self, port, destination='127.0.0.1'):
""" Create a forwarding tunnel on this client and return it.
"""
tun = self.conn.create_tunnel()
self.proxy.forward(tun.label, port, destination)
return tun
def script_run(self, sha1_hash, script, owner, batch=None, *args):
""" Run a script on the host.
"""
return self.conn.call('script_run', sha1_hash, script, owner, batch, *args)
def plugin_help(self, plugin, method):
""" Get help about a plugin installed on matching host.
"""
return self.conn.call('plugin_help', plugin, method)
def plugin_run(self, plugin, method, owner, batch=None, **kwargs):
""" Run a plugin method on the host.
"""
return self.conn.call('plugin_run', plugin, method, owner,
batch=batch, **kwargs)
def plugin_install(self, sha1_hash, plugin):
""" Install a plugin on the host.
"""
return self.conn.call('plugin_install', sha1_hash, plugin)
def plugin_uninstall(self, plugin):
""" Uninstall a plugin on the host.
"""
return self.conn.call('plugin_uninstall', plugin)
Client.register_client_class(HostClient)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/hv.py
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl.  If not, see <http://www.gnu.org/licenses/>.
import threading
from cloudcontrol.server.clients import Client
from cloudcontrol.server.clients.host import HostClient
class HypervisorHandler(HostClient.RPC_HANDLER):
    """ Handler bound to an hv client.
"""
class HvClient(HostClient):
""" A hv client connected to the cc-server.
"""
ROLE = 'hv'
RPC_HANDLER = HypervisorHandler
def __init__(self, *args, **kwargs):
super(HvClient, self).__init__(*args, **kwargs)
self._hv_lock = threading.RLock()
@property
def hvlock(self):
        """ The lock used on hypervisors for migration.
"""
return self._hv_lock
#
# Children specific methods:
#
def execute(self, command):
return self.conn.call('execute_command', command)
def get_child_remote_tags(self, obj_id, tag):
return self.conn.call('sub_tags', obj_id, (tag,))[tag]
def vm_action(self, action, vms, *args, **kwargs):
return self.conn.call(action, vms, *args, **kwargs)
def define(self, data, format):
return self.conn.call('vm_define', data, format)
Client.register_client_class(HvClient)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/clients/spv.py
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl.  If not, see <http://www.gnu.org/licenses/>.
from cloudcontrol.server.clients import Client, RegisteredCCHandler
from cloudcontrol.server.handlers import listed
class SpvHandler(RegisteredCCHandler):
    """ Handler bound to the 'spv' role.
"""
@listed
def list(self, query):
""" List all objects registered on this instance.
:param query: the query to select objects to show
"""
self.logger.debug('Executed list function with query %s', query)
objects = self.server.list(query)
return {'objects': objects}
class SpvClient(Client):
""" A spv client connected to the cc-server.
"""
ROLE = 'spv'
RPC_HANDLER = SpvHandler
Client.register_client_class(SpvClient)
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/conf.py
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl.  If not, see <http://www.gnu.org/licenses/>.
""" This module provides an abstraction of the client configuration directory.
The client configuration directory contains a list of ``.json`` files; each
file contains the configuration for one client. The username of the client is
the filename (excluding the extension).
The schema of the json file is described below::
{
'password': '',
'role': '',
'tags': {'tag1': 'value'},
        'rights': []
}
Usage example:
>>> conf = CCConf('/etc/cloudcontrol/clients')
>>> conf.create_account(login='rms', password='secret', role='cli')
>>> conf.create_account(login='server42', password='secret', role='node')
>>> print conf.authentify('server42', 'pouet')
None
>>> print conf.authentify('server42', 'secret')
u'node'
>>> conf.add_tag('rms', 'admin', 'yes')
>>> conf.show('rms')
{'password': '{SSHA}...',
 'role': 'cli',
 'tags': {'admin': 'yes'},
 'rights': []}
>>> conf.remove_account('rms')
>>>
"""
import hashlib
import base64
import random
import threading
import json
import os
import re
from functools import wraps
def writer(func):
    """ Decorator used to make thread-safe the methods that perform write
    operations on the client configuration tree.
"""
@wraps(func)
def f(self, *args, **kwargs):
with self._lock:
return func(self, *args, **kwargs)
return f
class CCConf(object):
""" Create a new configuration interface.
:param path_directory: the directory to store the configuration files
"""
CONF_TEMPLATE = {'password': None,
'role': None,
'tags': {},
'rights': []}
    RE_SALTPW = re.compile(r'{(?P<method>[A-Z0-9]+)}(?P<password>.+)')
def __init__(self, logger, path_directory):
self.logger = logger
self._path = path_directory
self._lock = threading.RLock()
def __enter__(self):
return self._lock.__enter__()
def __exit__(self, *args, **kwargs):
return self._lock.__exit__(*args, **kwargs)
def _get_conf(self, login):
        """ Return the configuration of a client by its login.
:param login: login of the client
:return: the configuration of the client
:raise CCConf.UnknownAccount: if user login is unknown
"""
filename = os.path.join(self._path, '%s.json' % login)
if os.path.isfile(filename):
conf = json.load(open(filename, 'r'))
self.logger.debug('Getting configuration %s: %s', filename, conf)
return conf
else:
raise CCConf.UnknownAccount('%s is not a file' % filename)
def _set_conf(self, login, conf, create=False):
""" Update the configuration of a client by its login.
:param login: login of the client
:param conf: configuration to set for the client
:raise CCConf.UnknownAccount: if user login is unknown
"""
filename = os.path.join(self._path, '%s.json' % login)
self.logger.debug('Writing configuration %s: %s', filename, conf)
        # XOR: an update requires an existing file, a creation a missing one.
        if os.path.isfile(filename) ^ create:
json.dump(conf, open(filename, 'w'))
else:
raise CCConf.UnknownAccount('%s is not a file' % filename)
def acquire(self):
""" Acquire the configuration writing lock for non-atomic configuration
changes.
.. warning::
Don't forget to call the :meth:`release` method after your changes
for each :meth:`acquire` you made.
"""
self._lock.acquire()
def release(self):
""" Release the configuration writing lock.
"""
self._lock.release()
def show(self, login):
""" Show the configuration for specified account.
:param login: the login of the client
:return: configuration of user
"""
return self._get_conf(login)
def _unsaltify(self, string, digest_size):
string = base64.decodestring(string)
password = string[0:digest_size]
salt = string[digest_size:]
return password, salt
def _get_randsalt(self, size=10):
salt = ''
for _ in xrange(size):
salt += chr(random.randint(1, 255))
return salt
def _auth_ssha(self, provided_passwd, configured_passwd=None):
if configured_passwd is not None:
salt = self._unsaltify(configured_passwd, 20)[1]
else:
salt = self._get_randsalt()
digest = hashlib.sha1(str(provided_passwd) + salt).digest()
return '{SSHA}%s' % base64.b64encode(digest + salt)
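The ``{SSHA}`` scheme above stores ``base64(sha1(password + salt) + salt)``, so the salt can be recovered from the stored string itself for verification. A standalone sketch of the same round trip (Python 3 syntax, illustrative function names, not part of this module):

```python
import base64
import hashlib
import os

def ssha_hash(password, salt=None):
    """Build an RFC-2307 style {SSHA} string: base64(sha1(pw + salt) + salt)."""
    if salt is None:
        salt = os.urandom(10)  # same salt length as _get_randsalt() above
    digest = hashlib.sha1(password.encode() + salt).digest()
    return '{SSHA}' + base64.b64encode(digest + salt).decode()

def ssha_verify(password, ssha_string):
    """Re-hash with the stored salt (the bytes after the 20-byte sha1 digest)."""
    raw = base64.b64decode(ssha_string[len('{SSHA}'):])
    salt = raw[20:]  # sha1 digests are 20 bytes long; the remainder is the salt
    return ssha_hash(password, salt) == ssha_string

stored = ssha_hash('secret')
print(ssha_verify('secret', stored))  # True
print(ssha_verify('wrong', stored))   # False
```

This is why ``authentify`` below can validate a password without ever storing it in clear: re-hashing the candidate with the extracted salt must reproduce the stored string exactly.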
def _auth_sha(self, provided_passwd, configured_passwd=None):
digest = hashlib.sha1(str(provided_passwd)).digest()
return '{SHA}%s' % base64.b64encode(digest)
def _auth_smd5(self, provided_passwd, configured_passwd=None):
if configured_passwd is not None:
salt = self._unsaltify(configured_passwd, 16)[1]
else:
salt = self._get_randsalt()
        digest = hashlib.md5(str(provided_passwd) + salt).digest()
return '{SMD5}%s' % base64.b64encode(digest + salt)
def _auth_md5(self, provided_passwd, configured_passwd=None):
        digest = hashlib.md5(str(provided_passwd)).digest()
return '{MD5}%s' % base64.b64encode(digest)
def _auth_plain(self, provided_passwd, configured_passwd=None):
return provided_passwd
def _hash_password(self, password, method='ssha'):
""" Hash a password using given method and return it.
:param password: the password to hash
:param method: the hashing method
:return: hashed password
"""
meth = '_auth_%s' % method.lower()
if hasattr(self, meth):
auth = getattr(self, meth)
return auth(password)
else:
raise CCConf.BadMethodError('Bad hashing method: %s' % repr(method))
def authentify(self, login, password):
        """ Authentify the client using its login and password. The function
        returns the role of the client on success, or ``None``.
:param login: the login of the client
:param password: the password of the client
:return: the client's role or None on failed authentication
:raise CCConf.UnknownAccount: if user login is unknown
"""
conf = self._get_conf(login)
passwd_conf = conf['password']
# Check if account password is disabled or if account is disabled:
if passwd_conf is None:
return None
is_valid = False
m = CCConf.RE_SALTPW.match(passwd_conf)
if m is None:
meth = '_auth_plain'
password_wo_method = passwd_conf
else:
meth = '_auth_%s' % m.group('method').lower()
password_wo_method = m.group('password')
if hasattr(self, meth):
auth = getattr(self, meth)
is_valid = auth(password, password_wo_method) == passwd_conf
else:
self.logger.warning('Bad authentication method for %s: '
'%s', login, m.group('method'))
if is_valid:
return conf['role']
else:
return None
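``authentify`` dispatches on the ``{METHOD}`` prefix parsed by ``RE_SALTPW``; any stored value without a recognized prefix is treated as a plain-text password. A minimal sketch of that dispatch step (hypothetical helper name):

```python
import re

# Same shape as CCConf.RE_SALTPW: a '{METHOD}' prefix, then the encoded payload.
RE_SALTPW = re.compile(r'{(?P<method>[A-Z0-9]+)}(?P<password>.+)')

def split_stored_password(stored):
    """Return (method, payload); unprefixed strings map to ('PLAIN', stored)."""
    m = RE_SALTPW.match(stored)
    if m is None:
        return 'PLAIN', stored
    return m.group('method'), m.group('password')

print(split_stored_password('{SSHA}AbCdEf=='))  # ('SSHA', 'AbCdEf==')
print(split_stored_password('secret'))          # ('PLAIN', 'secret')
```

The method name is then lowercased and mapped to an ``_auth_*`` method, which is why every supported scheme above follows that naming convention.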
@writer
def set_password(self, login, password, method='ssha'):
""" Update the client's password in the configuration.
:param login: login of the user
:param password: new password
:param method: the hashing method to use
:raise CCConf.UnknownAccount: if user login is unknown
"""
conf = self._get_conf(login)
password = self._hash_password(password, method)
conf['password'] = password
self._set_conf(login, conf)
@writer
def add_tag(self, login, tag_name, tag_value):
""" Add the tag to the user.
:param login: login of the user
:param tag_name: tag name to add to the user
:param tag_value: the tag value
:raise CCConf.UnknownAccount: if user login is unknown
"""
self.logger.debug('Added tag %s:%s for %s account',
tag_name, tag_value, login)
conf = self._get_conf(login)
conf['tags'][tag_name] = tag_value
self._set_conf(login, conf)
@writer
def remove_tag(self, login, tag):
        """ Remove the tag from the user.
        :param login: login of the user
        :param tag: tag to remove from the user
:raise CCConf.UnknownAccount: if user login is unknown
"""
        self.logger.debug('Removed tag %s for %s account', tag, login)
conf = self._get_conf(login)
if tag in conf['tags']:
del conf['tags'][tag]
self._set_conf(login, conf)
@writer
def remove_account(self, login):
""" Remove the configuration of the account.
:param login: login of the account to remove
:raise CCConf.UnknownAccount: if user login is unknown
"""
self.logger.debug('Removed %s account', login)
filename = os.path.join(self._path, '%s.json' % login)
if os.path.exists(filename):
os.remove(filename)
else:
raise CCConf.UnknownAccount('%s is not a file' % filename)
@writer
def create_account(self, login, role, password):
""" Create a new account.
:param login: login of the new user
:param password: password of the new user
:param role: the role of the new user
        :raise CCConf.AlreadyExistingAccount: if the login already exists
"""
self.logger.debug('Creating %s account with role %s', login, role)
filename = os.path.join(self._path, '%s.json' % login)
if os.path.exists(filename):
raise CCConf.AlreadyExistingAccount('%s found' % filename)
else:
conf = CCConf.CONF_TEMPLATE.copy()
conf['role'] = role
if password is not None:
conf['password'] = self._hash_password(password)
self._set_conf(login, conf, create=True)
@writer
def copy_account(self, copy_login, login, password):
""" Create a new account based on another.
        :param copy_login: the login of the account to copy
        :param login: the login of the new account
        :param password: password of the new user
        :raise CCConf.AlreadyExistingAccount: if the login already exists
        :raise CCConf.UnknownAccount: if the copy login doesn't exist
"""
conf_copy = self._get_conf(copy_login)
self.create_account(login, conf_copy['role'], password)
password = self._get_conf(login)['password']
conf_copy['password'] = password
self._set_conf(login, conf_copy)
@writer
def add_right(self, login, tql, method=None, target='allow', index=None):
""" Add a right rule to the provided account.
:param login: the login of the account
:param tql: the TQL request to allow
:param method: the method to which apply the right, None if all
        :param target: 'allow' if the rule allows the call, 'deny' if it denies it
:param index: the index of the new rule, set None if the rule is
appended to the end of the ruleset.
.. note::
If the index is out of range, the rule will be added to the end of
the ruleset.
"""
conf = self._get_conf(login)
rights = conf['rights']
rule = {'tql': tql, 'method': method, 'target': target}
if index is None:
index = len(rights)
rights.insert(index, rule)
self._set_conf(login, conf)
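The out-of-range note above relies on ``list.insert`` clamping its index, so any too-large index simply appends the rule. For instance:

```python
rights = [{'tql': 'id=a', 'method': None, 'target': 'allow'}]
# list.insert clamps out-of-range indexes, so a large index appends the rule:
rights.insert(99, {'tql': 'id=b', 'method': 'execute', 'target': 'deny'})
print([r['tql'] for r in rights])  # ['id=a', 'id=b']
# A negative index counts from the end, as for regular list indexing:
rights.insert(-1, {'tql': 'id=c', 'method': None, 'target': 'allow'})
print([r['tql'] for r in rights])  # ['id=a', 'id=c', 'id=b']
```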
@writer
def remove_right(self, login, index):
""" Remove a right rule from the provided account.
:param login: the login of the account
:param index: the index of the rule to delete or None for all rules
"""
conf = self._get_conf(login)
if index is None:
conf['rights'] = []
else:
try:
conf['rights'].pop(index)
except IndexError:
raise CCConf.OutOfRangeIndexError('Bad rule index '
'%s' % repr(index))
self._set_conf(login, conf)
def list_accounts(self):
""" List all registered accounts.
:return: :class:`tuple` of :class:`str`, each item being an
account login
"""
logins = []
for filename in os.listdir(self._path):
login, ext = os.path.splitext(filename)
if ext == '.json':
logins.append(login)
return tuple(logins)
class UnknownAccount(Exception):
pass
class AlreadyExistingAccount(Exception):
pass
class BadMethodError(Exception):
pass
class OutOfRangeIndexError(Exception):
pass
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/db.py
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl.  If not, see <http://www.gnu.org/licenses/>.
""" This module contains some cc-server specific helpers for the CloudControl
tags database provided in cc-commons.
"""
from abc import abstractproperty
from datetime import datetime
from collections import defaultdict
from sjrpc.core import AsyncWatcher
from cloudcontrol.common.tql.db.tag import BaseTag, BaseTagInterface
from cloudcontrol.common.tql.db.object import TqlObject
from cloudcontrol.common.tql.db.requestor import StaticRequestor, fetcher
class SObject(TqlObject):
""" A TQL object with features specific to cc-server.
"""
def __init__(self, *args, **kwargs):
super(SObject, self).__init__(*args, **kwargs)
self._overridden = defaultdict(lambda: None)
def register(self, tag, override=False):
""" Register a tag on this object (or override).
"""
if override:
# The tag to register must override an eventual existing tag.
# Overridden tag is moved in the overridden tags dict:
if tag.name in self._tags:
self._overridden[tag.name] = self._tags[tag.name]
del self._tags[tag.name]
elif tag.name in self._tags:
# The tag to register is already overridden, we place it directly
# on the overridden tags dict:
if tag.name in self._overridden:
raise KeyError('A tag with this name is already registered on this object')
self._overridden[tag.name] = tag
return
return super(SObject, self).register(tag)
def unregister(self, name, override=False):
""" Unregister a tag on this object (or remove override).
"""
super(SObject, self).unregister(name)
        # If a tag is overridden, restore it to the tag list:
if override and name in self._overridden:
self._tags[name] = self._overridden[name]
del self._overridden[name]
def is_overriding(self, name):
""" Return True if a tag is overriding another one for the name.
If the tag is not found, False is returned.
"""
return self._overridden[name] is not None
class CCSAsyncTagInterface(BaseTagInterface):
@abstractproperty
def client(self):
""" The client on which do the fetch.
"""
@abstractproperty
def cached(self):
""" The cached value stored in the tag. Can raise an OutdatedCacheError
if the cache is out of date.
"""
@cached.setter
    def cached(self, value):
""" Write the new cached value.
"""
class RemoteTag(BaseTag):
""" A tag which is available remotely on a client.
"""
def __init__(self, name, callback, ttl=None):
super(RemoteTag, self).__init__(name)
self._callback = callback
self._ttl = ttl
self._cache_last_update = None
self._cache_value = u''
def __repr__(self):
return '<%s %s (ttl=%r expire=%r cached=%r)>' % (self.__class__.__name__,
self.name,
self._ttl,
self.ttl,
self._cache_value)
@property
def callback(self):
return self._callback
@property
def ttl(self):
if self._cache_last_update is None:
return 0
elif self._ttl is None:
return float('inf')
else:
dt = datetime.now() - self._cache_last_update
age = dt.seconds + dt.days * 60 * 60 * 24
return self._ttl - age
@ttl.setter
def ttl(self, value):
self._ttl = value
@property
def cached(self):
""" Get the cached value.
"""
if self.ttl > 0:
return self._cache_value
else:
raise self.OutdatedCacheError('Cache is out of date')
@cached.setter
def cached(self, value):
""" Set the cached value.
"""
self._cache_value = value
self._cache_last_update = datetime.now()
def invalidate(self):
self._cache_last_update = None
class OutdatedCacheError(Exception):
pass
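The TTL bookkeeping above can be isolated into a minimal, standalone sketch (all names here are hypothetical) that mirrors the ttl / cached / invalidate contract of RemoteTag:

```python
from datetime import datetime

class TTLCache(object):
    """Minimal sketch of the RemoteTag cache contract (hypothetical names)."""

    class Outdated(Exception):
        pass

    def __init__(self, ttl=None):
        self._ttl = ttl          # None means "never expires"
        self._value = None
        self._last_update = None

    @property
    def ttl(self):
        # Remaining lifetime: 0 when never written, infinite when no TTL set.
        if self._last_update is None:
            return 0
        if self._ttl is None:
            return float('inf')
        age = (datetime.now() - self._last_update).total_seconds()
        return self._ttl - age

    def get(self):
        if self.ttl > 0:
            return self._value
        raise self.Outdated('cache is out of date')

    def set(self, value):
        self._value = value
        self._last_update = datetime.now()

    def invalidate(self):
        # Forget the write timestamp so the next get() raises Outdated.
        self._last_update = None
```

Unlike the RemoteTag code, this sketch uses `timedelta.total_seconds()` instead of the manual `seconds + days * 86400` computation; the observable behaviour is the same to within microseconds.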
CCSAsyncTagInterface.register(RemoteTag)
class SRequestor(StaticRequestor):
@fetcher(CCSAsyncTagInterface)
def fetcher_ccs_async(self, mapping):
""" Fetching method using asynchronous calls from sjrpc to get values
from the remote clients.
"""
watcher = AsyncWatcher()
# Do the requests to remote clients:
for obj, tags in mapping.iteritems():
to_update = set()
for tag in tags:
try:
obj.set(tag.name, tag.cached)
except tag.OutdatedCacheError:
to_update.add(tag)
if to_update:
# All remote tags of an object are always bound to the same
# client. Request for tag value is made in a single call to
# avoid multiple query/response, so we take the callback from
# the first tag to do the update on the whole:
cb = tuple(to_update)[0].callback
cb(watcher, obj, [t.name for t in to_update])
# Get and process the results:
for update in watcher.iter(timeout=4, raise_timeout=True): #TODO: adaptive timeout
requested_tags, obj = update['data']
if 'return' not in update:
for tag_name in requested_tags:
obj.set(tag_name, '#ERR#')
else:
tags = update['return']
for tag_name in requested_tags:
tag_value = tags.get(tag_name)
if tag_value is None:
obj.set(tag_name, '#ERR#')
else:
obj.set(tag_name, unicode(tag_value))
obj[tag_name].cached = unicode(tag_value) # Set the tag cache value
# ---- cloudcontrol/server/election.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" This module contains the hypervisor destination election stuff.
"""
from copy import copy
from cloudcontrol.server.exceptions import (UnknownElectionAlgo, UnknownElectionType,
ElectionError)
def tags(*args):
""" Decorator used to declare tags used by a filter.
"""
def decorator(func):
func.__tags__ = set(args)
return func
return decorator
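The decorator simply annotates each filter function with the set of tags it needs; `_get_candidates` later unions those sets to know which tags to request. A self-contained illustration (the filter names are hypothetical stand-ins for the methods below):

```python
def tags(*args):
    """Attach the set of required tag names to the decorated filter."""
    def decorator(func):
        func.__tags__ = set(args)
        return func
    return decorator

@tags('r')
def filter_is_hv(vm, hvs):
    # Keep only objects whose role tag is 'hv':
    return [hv for hv in hvs if hv.get('r') == 'hv']

@tags('con')
def filter_is_connected(vm, hvs):
    # Keep only connected hypervisors:
    return [hv for hv in hvs if hv.get('con')]

# Union of every filter's declared tags, as _get_candidates does:
needed = set()
for f in (filter_is_hv, filter_is_connected):
    needed |= getattr(f, '__tags__', set())
```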
class Elector(object):
# Filtering function for destination hypervisors:
FILTERS = {
'cold': (('is_hv', 'filter r=hv'),
('not_source_hv', 'filter source hv'),
('is_connected', 'filter connected hv'),
('vm_htype_eq_hv', 'filter bad hv types'),
('has_alloc', 'filter allocatable hv'),
('duplicate_name', 'filter vm duplicate names'),
('enough_disk', 'filter hv with not enough disk'),),
'hot': (('is_hv', 'filter r=hv'),
('not_source_hv', 'filter source hv'),
('is_connected', 'filter connected hv'),
('vm_htype_eq_hv', 'filter bad hv types'),
('has_alloc', 'filter allocatable hv'),
('duplicate_name', 'filter vm duplicate names'),
#('enough_ram', 'filter hv with not enough ram'),
('enough_disk', 'filter hv with not enough disk'),),
}
# Available algos for each types:
ALGO_BY_TYPES = {'cold': ('fair', ),
'hot': ('fair', )}
def __init__(self, server, query_vm, query_dest, client):
# The server instance for this election:
self._server = server
# The TQL query to select vm:
self._query_vm = query_vm
# The TQL query to select destination hypervisor:
self._query_dest = query_dest
# The client who requested the election:
self._client = client
def election(self, mtype, algo):
""" Generate a new migration plan for this election. You must specify
the migration type and the distribution algoritm.
:param mtype: the migration type
:param algo: the distribution algoritm
"""
# Check the choosen election method:
if mtype not in self.ALGO_BY_TYPES:
raise UnknownElectionType('%r is an unknown migration type' % mtype)
else:
if algo not in self.ALGO_BY_TYPES[mtype]:
raise UnknownElectionAlgo('%r is an unknown migration algorithm' % algo)
else:
func = '_algo_%s' % algo
if not hasattr(self, func):
raise UnknownElectionAlgo('%r not found' % func)
else:
distribute = getattr(self, func)
# Get the destination hypervisor candidates:
candidates, errors = self._get_candidates(self.FILTERS[mtype])
# Check if VM migration election raised an error:
if errors:
# If errors are found, we report only the first of them, because we
# have currently no way to report problem for each vm:
vm, desc = errors[0]
raise ElectionError('No destination found for %r, last destination'
' filtered by %s' % (vm['id'], desc))
# Distributes VMs to each candidate:
migration_plan = distribute(mtype, candidates)
# Return the migration plan:
return migration_plan
def _get_candidates(self, filters):
# Get all the tags needed for hypervisors and construct the final
# filter list:
filterfuncs = []
hv_tags = set()
for name, desc in filters:
filterfunc = getattr(self, '_filter_%s' % name)
filterfuncs.append((filterfunc, desc))
hv_tags |= getattr(filterfunc, '__tags__', set())
# Get the selected vms and hvs:
vms = self._client.list(self._query_vm, show=('*',), method='migrate')
hvs = self._client.list(self._query_dest, show=hv_tags, method='migrate')
candidates = []
errors = []
# Filters the candidates:
for vm in vms:
if vm['r'] != 'vm':
continue
vm_dest = copy(hvs)
for func, desc in filterfuncs:
vm_dest = func(vm, vm_dest)
# VM is added to errors if no destination HV is found for it:
if not vm_dest:
errors.append((vm, desc))
break
else:
candidates.append((vm, vm_dest))
return candidates, errors
#####
##### Distribution algorithm methods:
#####
def _algo_fair(self, mtype, candidates):
migration_plan = []
# Sort vm by number of destination hv:
candidates = sorted(candidates, key=lambda x: len(x[1]))
hv_alloc = {}
for vm, hvs in candidates:
if not hvs:
raise ElectionError('No destination found for %r vm' % vm['id'])
else:
# Try to take an hypervisor that is not already in the plan:
for hv in hvs:
if hv['id'] not in hv_alloc:
break
else:
# If all candidates for this VM are already in the
# migration plan, we take the less allocated:
hv = min(hvs, key=lambda x: hv_alloc[x['id']])
migration_plan.append({'sid': vm['id'], 'did': hv['id'],
'type': mtype})
hv_alloc[hv['id']] = hv_alloc.get(hv['id'], 0) + 1
return migration_plan
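The fair distribution above can be sketched as a standalone function (a simplification of `_algo_fair`, with plain dicts standing in for TQL objects):

```python
def fair_plan(candidates, mtype='cold'):
    """Sketch of the fair algorithm: VMs with the fewest candidate
    hypervisors are placed first; each VM prefers an HV not yet in the
    plan, falling back to the least-allocated one."""
    plan = []
    alloc = {}  # hv id -> number of VMs already assigned to it
    for vm, hvs in sorted(candidates, key=lambda c: len(c[1])):
        if not hvs:
            raise ValueError('no destination found for %r' % vm['id'])
        # Prefer a hypervisor that is not already in the plan:
        for hv in hvs:
            if hv['id'] not in alloc:
                break
        else:
            # All candidates already used: take the least allocated one.
            hv = min(hvs, key=lambda h: alloc[h['id']])
        plan.append({'sid': vm['id'], 'did': hv['id'], 'type': mtype})
        alloc[hv['id']] = alloc.get(hv['id'], 0) + 1
    return plan
```

Sorting by candidate count first matters: the most constrained VMs get their pick before popular hypervisors fill up.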
#####
##### Filtering methods:
#####
@tags('r')
def _filter_is_hv(self, vm, hvs):
returned = []
for hv in hvs:
if hv.get('r') == 'hv':
returned.append(hv)
return returned
@tags('con')
def _filter_is_connected(self, vm, hvs):
returned = []
for hv in hvs:
if hv.get('con'):
returned.append(hv)
return returned
@tags('htype')
def _filter_vm_htype_eq_hv(self, vm, hvs):
returned = []
vm_htype = vm.get('htype')
for hv in hvs:
if hv.get('htype') == vm_htype:
returned.append(hv)
return returned
@tags('p')
def _filter_not_source_hv(self, vm, hvs):
returned = []
for hv in hvs:
if hv['id'] != vm['p']:
returned.append(hv)
return returned
@tags('alloc')
def _filter_has_alloc(self, vm, hvs):
returned = []
for hv in hvs:
if hv.get('alloc', False):
returned.append(hv)
return returned
def _filter_duplicate_name(self, vm, hvs):
returned = []
_hv_id, _, vm_name = vm['id'].partition('.')
for hv in hvs:
vms = self._server.list('id:%s.*$h' % hv['id'])
duplicate = False
for other_vm in vms:
if other_vm.get('h') == vm_name:
duplicate = True
break
if not duplicate:
returned.append(hv)
return returned
@tags('memfree')
def _filter_enough_ram(self, vm, hvs):
returned = []
for hv in hvs:
if int(hv.get('memfree', 0)) >= int(vm.get('mem', 0)):
returned.append(hv)
return returned
@tags('cpu')
def _filter_enough_core(self, vm, hvs):
returned = []
for hv in hvs:
# Calculate the total number of vcpu used by VMs:
vms = self._server.list('id:%s.*$cpu' % hv['id'])
count = 0
for other_vm in vms:
count += int(other_vm.get('cpu', 0))
if int(hv.get('cpu', 0)) >= count + int(vm.get('cpu', 0)):
returned.append(hv)
return returned
@tags('*')
def _filter_enough_disk(self, vm, hvs):
# NOTE: disk filtering is currently short-circuited: every candidate
# passes and the pool-size checks below are never reached.
returned = []
return hvs
# Calculate the size needed for each pools:
pools = {}
for disk in vm['disk'].split():
size = vm['disk%s_size' % disk]
pool = vm['disk%s_pool' % disk]
vol = vm['disk%s_vol' % disk]
dpool = pools.get(pool, {'size': 0, 'vols': set()})
dpool['size'] += int(size)
dpool['vols'].add(vol)
pools[pool] = dpool
# Check for each HV if it match:
for hv in hvs:
good = True
for pool, prop in pools.iteritems():
free = int(hv.get('sto%s_free' % pool, 0))
vols = set(hv.get('sto%s_vol' % pool, '').split())
if free < prop['size'] or prop['vols'] & vols:
good = False
if good:
returned.append(hv)
return returned
def _filter_not_locked(self, vm, hvs):
if len(hvs) == 1:
return hvs
returned = []
for hv in hvs:
lock = self._server.get_client(hv['id']).hvlock
if not lock.locked():
returned.append(hv)
if len(returned) == 0:
return hvs
return returned
def _filter_biggest_id(self, hvs):
return sorted(hvs, key=lambda x: x['id'])[-1]
# ---- cloudcontrol/server/exceptions.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
class AlreadyRegistered(Exception):
pass
class AuthenticationError(Exception):
pass
class RightError(Exception):
pass
class BadObjectError(Exception):
pass
class NotConnectedAccountError(Exception):
pass
class ReservedTagError(Exception):
pass
class BadRoleError(Exception):
pass
class JobError(Exception):
pass
class BadJobTypeError(Exception):
pass
class UnknownJobError(Exception):
pass
class UnknownElectionType(Exception):
pass
class UnknownElectionAlgo(Exception):
pass
class UnknownMigrationType(Exception):
pass
class UnknownObjectError(Exception):
pass
class ElectionError(Exception):
pass
class CloneError(Exception):
pass
# ---- cloudcontrol/server/handlers.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import inspect
from sjrpc.utils import RpcHandler
from cloudcontrol.server import __version__
def listed(func):
func.__listed__ = True
return func
class Reporter(object):
""" Simple class used to report error, warning and success of command execution.
"""
def __init__(self):
self._reports = []
def get_dict(self):
return {'objects': self._reports,
'order': ['id', 'status', 'message', 'output']}
def success(self, oid, message, output=None, jobs=None):
report = {'id': oid, 'status': 'success',
'message': message, 'output': output}
if jobs is not None:
if isinstance(jobs, basestring):
jobs = [jobs]
jobs = ' '.join(jobs)
report['jobs'] = jobs
self._reports.append(report)
def warn(self, oid, message, output=None):
self._reports.append({'id': oid, 'status': 'warn',
'message': message, 'output': output})
def error(self, oid, message, output=None):
self._reports.append({'id': oid, 'status': 'error',
'message': message, 'output': output})
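Typical use of the reporter looks like this; the block carries a condensed, Python 3 friendly copy of the class above (`str` instead of `basestring`) so the example runs standalone:

```python
class Reporter(object):
    """Condensed copy of the Reporter above, for a runnable example."""

    def __init__(self):
        self._reports = []

    def get_dict(self):
        return {'objects': self._reports,
                'order': ['id', 'status', 'message', 'output']}

    def success(self, oid, message, output=None, jobs=None):
        report = {'id': oid, 'status': 'success',
                  'message': message, 'output': output}
        if jobs is not None:
            if isinstance(jobs, str):
                jobs = [jobs]
            # Job identifiers are flattened into one space-separated field:
            report['jobs'] = ' '.join(jobs)
        self._reports.append(report)

    def error(self, oid, message, output=None):
        self._reports.append({'id': oid, 'status': 'error',
                              'message': message, 'output': output})

reporter = Reporter()
reporter.success('hv1.vm1', 'vm started', jobs='42')
reporter.error('hv1.vm2', 'vm not found')
result = reporter.get_dict()
```

The 'order' key tells the CLI in which column order to render each report dict.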
class CCHandler(RpcHandler):
""" Base class for handlers of CloudControl server.
This class provide the following features:
- functions can be used to get all functions decorated with the listed
decorator
- version can be used to get the current cc-server version
"""
def __init__(self, client):
self.client = client
self.server = client.server # Shortcut to server
self.conf = client.server.conf # Shortcut to configuration
def __getitem__(self, name):
if name.startswith('_'):
# Filter the private members access:
raise KeyError('Remote name %s is private.' % repr(name))
else:
self.logger.debug('Called %s.%s', self.__class__.__name__, name)
return super(CCHandler, self).__getitem__(name)
@property
def logger(self):
return self.client.logger
@listed
def functions(self):
""" Show the list of functions available to the peer.
:return: list of dict with keys name and description.
"""
cmd_list = []
for attr_name in dir(self):
attr = getattr(self, attr_name, None)
if getattr(attr, '__listed__', False):
cmd = {}
cmd['name'] = attr.__name__
doc = inspect.getdoc(attr)
if doc:
cmd['description'] = inspect.cleandoc(doc)
cmd_list.append(cmd)
return cmd_list
@listed
def version(self):
""" Return the current server version.
:return: the version
"""
return __version__
# ---- cloudcontrol/server/jobs/__init__.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" Server jobs.
"""
from cloudcontrol.server.jobs.coldmigration import ColdMigrationJob
from cloudcontrol.server.jobs.hotmigration import HotMigrationJob
from cloudcontrol.server.jobs.clone import CloneJob
from cloudcontrol.server.jobs.killoldcli import KillOldCliJob
from cloudcontrol.server.jobs.killclient import KillClientJob
from cloudcontrol.server.jobs.allocation import AllocationJob
__all__ = ('ColdMigrationJob', 'HotMigrationJob', 'CloneJob', 'KillOldCliJob',
'KillClientJob', 'AllocationJob')
# ---- cloudcontrol/server/jobs/allocation.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import threading
from cloudcontrol.common.jobs import Job
from cloudcontrol.server.allocator import Allocator, AllocationError
class DeployError(Exception):
""" Exception raised when an error occurs while deploying a
virtual machine on a hypervisor.
"""
class AllocationJob(Job):
""" Allocate a set of VM on hypervisors.
"""
# Global allocation lock. Must be acquired before allocating a
# single virtual machine:
allocation_lock = threading.Lock()
def job(self, server, client, expanded_vmspec, tql_target=None):
allocator = Allocator(self.logger.getChild('allocator'), server, client)
results_by_vm = {}
total = len(expanded_vmspec)
errors = 0
for i, vmspec in enumerate(expanded_vmspec):
self.title = 'Virtual machines allocation (%d/%d, %d errors)' % (i + 1, total, errors)
with AllocationJob.allocation_lock:
self.checkpoint()
try:
target_hv_name = allocator.allocate(vmspec, tql_target)[0]
except AllocationError as err:
self.logger.warn('VM %s: allocation error: %s. Skipping...' % (vmspec['title'], err))
results_by_vm[vmspec['title']] = 'allocation error, %s' % err
errors += 1
except Exception as err:
self.logger.exception('VM %s: unknown error during allocation: %s. Skipping...' % (vmspec['title'], err))
results_by_vm[vmspec['title']] = 'unknown error (see logs)'
errors += 1
else:
# Keep the VM target in a tag for future migrations
if 'tags' not in vmspec:
vmspec['tags'] = {}
vmspec['tags'].setdefault('target', vmspec['target'])
try:
vm_uuid = self._deploy(vmspec, server.get_client(target_hv_name))
except Exception as err:
results_by_vm[vmspec['title']] = 'Error: %s' % err
errors += 1
else:
self.logger.info('VM %s: spawned on %s' % (vmspec['title'], target_hv_name))
results_by_vm[vmspec['title']] = 'spawned %s on %s' % (vm_uuid, target_hv_name)
# Write a summary of the Allocation job in the results attachment:
results = self.attachment('results')
for vm_name, result in results_by_vm.iteritems():
results.write('%s: %s\n' % (vm_name, result))
def _deploy(self, vmspec, target_hv):
# Delete keys which are not recognized by subsequent components:
for key in ('target', 'flags', 'riskgroup'):
try:
del vmspec[key]
except KeyError:
pass
return target_hv.define(vmspec, format='vmspec')
# ---- cloudcontrol/server/jobs/clone.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import re
from sjrpc.core import AsyncWatcher
from cloudcontrol.common.jobs import Job, JobCancelError
from cloudcontrol.server.utils import AcquiresAllOrNone
from cloudcontrol.server.exceptions import RightError
class CloneJob(Job):
""" A clone job.
Mandatory items:
* vm_name: name of the vm to clone
* new_vm_name: the name of the cloned vm
* hv_source: name of the hv which executes the VM
* hv_dest: the destination hypervisor
* author: login of the author cli
"""
def job(self, server, client, hv_source, vm_name, hv_dest, new_vm_name):
self._func_cancel_xfer = None # Callback to a function used to cancel
# a disk transfer
vm_id = '%s.%s' % (hv_source, vm_name)
self.title = 'Clone %s --> %s' % (vm_id, hv_dest)
self.logger.info('Started clone for %s', vm_id)
# Cancel the job if the user has not the right to clone the vm or to
# select an hypervisor as destination:
try:
client.check('clone', 'id=%s' % vm_id)
except RightError:
raise JobCancelError('author has no right to clone this VM')
try:
client.check('clone', 'id=%s' % hv_dest)
except RightError:
raise JobCancelError('author has no right to clone to this hv')
# Update the VM object:
vm = server.db.get_by_id(vm_id)
if vm is None:
raise JobCancelError('Source VM not found')
# Get the source and destination hv clients:
try:
source = server.get_client(hv_source)
except KeyError:
raise JobCancelError('source hypervisor is not connected')
try:
dest = server.get_client(hv_dest)
except KeyError:
raise JobCancelError('destination hypervisor is not connected')
self.checkpoint()
self.report('waiting lock for source and dest hypervisors')
self.logger.info('Trying to acquire locks')
with AcquiresAllOrNone(source.hvlock, dest.hvlock):
self.logger.info('Locks acquired')
self.checkpoint()
before_clone_autostart = vm['autostart'].lower() == 'yes'
# Create storages on destination:
old_new_disk_mapping = {} # Mapping between old and new disk names
names = {}
self.report('create volumes')
for disk in vm.get('disk', '').split():
# Getting informations about the disk:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
size = vm.get('disk%s_size' % disk)
assert pool is not None, 'pool tag doesn\'t exist'
assert name is not None, 'name tag doesn\'t exist'
assert size is not None, 'size tag doesn\'t exist'
# Change the name of the disk:
old_name = name
if name.startswith(vm_name):
suffix = name[len(vm_name):]
name = new_vm_name + suffix
else:
name = '%s_%s' % (new_vm_name, name)
names[disk] = name
fulloldname = '/dev/%s/%s' % (pool, old_name)
fullnewname = '/dev/%s/%s' % (pool, name)
old_new_disk_mapping[fulloldname] = fullnewname
# Create the volume on destination:
dest.proxy.vol_create(pool, name, int(size))
self.logger.info('Created volume %s/%s on destination '
'hypervisor (was %s)', pool, name, old_name)
# Rollback stuff for this action:
def rb_volcreate():
dest.proxy.vol_delete(pool, name)
self.checkpoint(rb_volcreate)
# Define VM:
self.report('define vm')
self.logger.info('XML configuration transfer')
vm_config = source.proxy.vm_export(vm_name)
# Change vm configuration XML to update it with new name:
new_vm_config = self._update_xml(vm_config, vm_name,
new_vm_name,
old_new_disk_mapping)
dest.proxy.vm_define(new_vm_config)
# Rollback stuff for vm definition:
def rb_define():
dest.proxy.vm_undefine(new_vm_name)
self.checkpoint(rb_define)
# Copy all source disk on destination disk:
for disk, name in names.iteritems():
self._copy_disk(source, dest, vm, disk, name)
# Setup autostart as it was before the migration
dest.proxy.vm_set_autostart(new_vm_name,
before_clone_autostart)
self.logger.info('Cloning completed with success')
def _update_xml(self, vm_config, old_name, name, old_new_name_mapping):
""" Update the XML definition of the VM with the new vm name, the new
vm disks, and remove the uuid tag.
"""
vm_config = vm_config.replace('<name>%s</name>' % old_name,
'<name>%s</name>' % name)
vm_config = re.sub('<uuid>.*?</uuid>\n', '', vm_config)
# delete MAC address, then it will be regenerated by libvirt
vm_config = re.sub("<mac address='.*?'/>\n", '', vm_config)
for old, new in old_new_name_mapping.iteritems():
vm_config = vm_config.replace("='%s'" % old,
"='%s'" % new)
return vm_config
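The kind of rewriting `_update_xml` performs can be exercised on a toy libvirt-style definition. The exact regular expressions here are assumptions, since the original patterns were stripped by markup extraction; the docstring and comments only guarantee that the name is replaced and the uuid and MAC elements are dropped so libvirt regenerates them:

```python
import re

def rewrite_vm_xml(xml, old_name, new_name, disk_map):
    """Sketch: rename the domain, drop <uuid> and <mac> so libvirt
    regenerates them, and point disks at the new volume paths."""
    xml = xml.replace('<name>%s</name>' % old_name,
                      '<name>%s</name>' % new_name)
    # Remove uuid and MAC address elements entirely:
    xml = re.sub(r'<uuid>.*?</uuid>\n?', '', xml)
    xml = re.sub(r"<mac address='.*?'/>\n?", '', xml)
    # Retarget every disk source attribute at its new volume path:
    for old, new in disk_map.items():
        xml = xml.replace("='%s'" % old, "='%s'" % new)
    return xml
```

Plain string/regex surgery on XML is fragile in general; it works here only because libvirt emits these elements in a predictable single-line form.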
def _copy_disk(self, source, dest, vm, disk, new_disk):
""" Copy the specified disk name of the vm from source to dest.
"""
# Get informations about the disk:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
self.report('copy %s/%s' % (pool, name))
# Make the copy and wait for it end:
xferprop = dest.proxy.vol_import(pool, new_disk)
# Register the cancel function:
def cancel_xfer():
dest.proxy.vol_import_cancel(xferprop['id'])
self._func_cancel_xfer = cancel_xfer
# Wait for the end of transfer:
watcher = AsyncWatcher()
watcher.register(source.conn, 'vol_export', pool, name, dest.ip, xferprop['port'])
watcher.register(dest.conn, 'vol_import_wait', xferprop['id'])
# Loop to poll status while job is running
job_running = True
while job_running:
# Timeout to get transferred bytes and update status
msgs = watcher.wait(timeout=10)
if not msgs:
_, percent = dest.proxy.vol_transfer_status(xferprop['id'])
self.report('copy %s/%s: %d%% done' % (pool, name, percent))
else:
job_running = False
self._func_cancel_xfer = None
# Compare checksum of two answers:
checksums = []
assert len(msgs) == 2
for msg in msgs:
if msg.get('error') is not None:
msg = 'error while copy: %s' % msg['error']['message']
raise JobCancelError(msg)
else:
checksums.append(msg['return'].get('checksum'))
self.checkpoint()
if checksums[0] != checksums[1]:
raise JobCancelError('checksum mismatches')
def cancel(self):
if self._func_cancel_xfer is not None:
self._func_cancel_xfer()
super(CloneJob, self).cancel()
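The clone and migration jobs guard both hypervisors with `AcquiresAllOrNone` from `cloudcontrol.server.utils`, which is not shown in this dump. A plausible sketch of such an all-or-none lock context (the real implementation may differ, e.g. by retrying non-blocking acquires to avoid deadlock):

```python
import threading
from contextlib import contextmanager

@contextmanager
def acquires_all_or_none(*locks):
    """Acquire every lock in order; on failure release the partial set,
    and always release everything on exit."""
    held = []
    try:
        for lock in locks:
            lock.acquire()
            held.append(lock)
        yield
    finally:
        # Release in reverse acquisition order:
        for lock in reversed(held):
            lock.release()
```

Acquiring in a consistent global order is what keeps two concurrent jobs locking the same pair of hypervisors from deadlocking each other.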
# ---- cloudcontrol/server/jobs/coldmigration.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
from sjrpc.core import AsyncWatcher
from cloudcontrol.common.jobs import Job, JobCancelError
from cloudcontrol.server.utils import AcquiresAllOrNone
from cloudcontrol.server.exceptions import RightError
class ColdMigrationJob(Job):
""" A cold vm migration job.
Mandatory items:
* vm_name: name of the vm to migrate
* hv_source: name of the hv which executes the VM
* hv_dest: the destination hypervisor
* author: login of the author cli
"""
def job(self, server, client, hv_source, vm_name, hv_dest):
self._func_cancel_xfer = None # Callback to a function used to cancel
# a disk transfer
vm_id = '%s.%s' % (hv_source, vm_name)
self.title = 'Cold migration %s --> %s' % (vm_id, hv_dest)
self.logger.info('Started migration for %s', vm_id)
# Cancel the job if the user has not the right to migrate the vm or to
# select an hypervisor as destination:
try:
client.check('migrate', 'id=%s' % vm_id)
except RightError:
raise JobCancelError('author has no right to migrate this VM')
try:
client.check('migrate', 'id=%s' % hv_dest)
except RightError:
raise JobCancelError('author has no right to migrate to this hv')
# Update the VM object:
vm = server.db.get_by_id(vm_id)
if vm is None:
raise JobCancelError('Source VM not found')
# Get the source and destination hv clients:
try:
source = server.get_client(hv_source)
except KeyError:
raise JobCancelError('source hypervisor is not connected')
try:
dest = server.get_client(hv_dest)
except KeyError:
raise JobCancelError('destination hypervisor is not connected')
self.checkpoint()
self.report('waiting lock for source and dest hypervisors')
with AcquiresAllOrNone(source.hvlock, dest.hvlock):
self.logger.info('Locks acquired')
self.checkpoint()
if vm['status'] != 'stopped':
raise JobCancelError('vm is not stopped')
before_migrate_autostart = vm['autostart'].lower() == 'yes'
# Create storages on destination:
self.report('create volumes')
for disk in vm.get('disk', '').split():
# Getting informations about the disk:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
size = vm.get('disk%s_size' % disk)
assert pool is not None, 'pool tag doesn\'t exist'
assert name is not None, 'name tag doesn\'t exist'
assert size is not None, 'size tag doesn\'t exist'
# Create the volume on destination:
dest.proxy.vol_create(pool, name, int(size))
self.logger.info('Created volume %s/%s on destination '
'hypervisor', pool, name)
# Rollback stuff for this action:
def rb_volcreate():
dest.proxy.vol_delete(pool, name)
self.checkpoint(rb_volcreate)
# Define VM:
self.report('define vm')
self.logger.info('XML configuration transfer')
vm_config = source.proxy.vm_export(vm_name)
dest.proxy.vm_define(vm_config)
# Rollback stuff for vm definition:
def rb_define():
dest.proxy.vm_undefine(vm_name)
self.checkpoint(rb_define)
# Copy all source disk on destination disk:
for disk in vm.get('disk', '').split():
self._copy_disk(source, dest, vm, disk)
# At this point, if operation is a success, all we need is just to
# cleanup source hypervisor from disk and vm. This operation *CAN'T*
# be cancelled or rolled back if anything fails (unlikely). The
# migration must be considered as a success, and the only way to
# undo this is to start a new migration in the other way.
# Delete the rollback list.
# This is mandatory to avoid data loss if the cleanup
# code below fail.
self._wayback = []
# Cleanup the disks:
for disk in vm.get('disk', '').split():
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
source.proxy.vol_delete(pool, name)
# Cleanup the VM:
source.proxy.vm_undefine(vm_name)
# Setup autostart as it was before the migration
dest.proxy.vm_set_autostart(vm_name,
before_migrate_autostart)
self.logger.info('Migration completed with success')
def _copy_disk(self, source, dest, vm, disk):
""" Copy the specified disk name of the vm from source to dest.
"""
# Get informations about the disk:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
self.logger.info('Started copy for %s/%s', pool, name)
self.report('copy %s/%s' % (pool, name))
# Make the copy and wait for it end:
xferprop = dest.proxy.vol_import(pool, name)
# Register the cancel function:
def cancel_xfer():
dest.proxy.vol_import_cancel(xferprop['id'])
self._func_cancel_xfer = cancel_xfer
# Wait for the end of transfer:
watcher = AsyncWatcher()
watcher.register(source.conn, 'vol_export', pool, name, dest.ip, xferprop['port'])
watcher.register(dest.conn, 'vol_import_wait', xferprop['id'])
# Loop to poll status while job is running
job_running = True
while job_running:
# Timeout to get transferred bytes and update status
msgs = watcher.wait(timeout=10)
if not msgs:
_, percent = dest.proxy.vol_transfer_status(xferprop['id'])
self.report('copy %s/%s: %s%% done' % (pool, name, percent))
else:
job_running = False
self._func_cancel_xfer = None
# Compare checksum of two answers:
checksums = []
assert len(msgs) == 2
for msg in msgs:
if msg.get('error') is not None:
msg = 'error while copy: %s' % msg['error']['message']
raise JobCancelError(msg)
else:
checksums.append(msg['return'].get('checksum'))
self.checkpoint()
if checksums[0] != checksums[1]:
raise JobCancelError('checksum mismatches')
def cancel(self):
if self._func_cancel_xfer is not None:
self._func_cancel_xfer()
super(ColdMigrationJob, self).cancel()
# ---- cloudcontrol/server/jobs/hotmigration.py ----
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import time
from sjrpc.core import AsyncWatcher
from cloudcontrol.common.jobs import Job, JobCancelError
from cloudcontrol.server.utils import AcquiresAllOrNone
from cloudcontrol.server.exceptions import RightError
class HotMigrationJob(Job):
""" A hot vm migration job.
Mandatory items:
* vm_name: name of the vm to migrate
* hv_source: name of the hv which executes the VM
* hv_dest: name of the destination hypervisor
* author: login of the cli author
"""
def job(self, server, client, hv_source, vm_name, hv_dest):
vm_id = '%s.%s' % (hv_source, vm_name)
self.title = 'Hot migration %s --> %s' % (vm_id, hv_dest)
self.logger.info('Started hot migration for %s', vm_id)
# Cancel the job if the user does not have the right to migrate the vm
# or to select a hypervisor as destination:
try:
client.check('migrate', 'id=%s' % vm_id)
except RightError:
raise JobCancelError('author has no right to migrate this VM')
try:
client.check('migrate', 'id=%s' % hv_dest)
except RightError:
raise JobCancelError('author has no right to migrate to this hv')
# Update the VM object:
vm = server.db.get_by_id(vm_id)
if vm is None:
raise JobCancelError('Source VM not found')
# Get the source and destination hv clients:
try:
source = server.get_client(hv_source)
except KeyError:
raise JobCancelError('source hypervisor is not connected')
try:
dest = server.get_client(hv_dest)
except KeyError:
raise JobCancelError('destination hypervisor is not connected')
self.checkpoint()
self.report('waiting lock for source and dest hypervisors')
with AcquiresAllOrNone(source.hvlock, dest.hvlock):
self.logger.info('Locks acquired')
self.checkpoint()
before_clone_autostart = vm['autostart'].lower() == 'yes'
if vm['status'] != 'running':
raise JobCancelError('vm is not running')
to_cleanup = []
# Create storages on destination and start synchronization:
disks = vm.get('disk', '').split()
for disk in disks:
to_cleanup += self._sync_disk(vm, disk, source, dest)
# Libvirt tunnel setup:
tunres_src = source.proxy.tun_setup()
def rb_tun_src():
source.proxy.tun_destroy(tunres_src)
self.checkpoint(rb_tun_src)
tunres_dst = dest.proxy.tun_setup(local=False)
def rb_tun_dst():
dest.proxy.tun_destroy(tunres_dst)
self.checkpoint(rb_tun_dst)
source.proxy.tun_connect(tunres_src, tunres_dst, dest.ip)
dest.proxy.tun_connect_hv(tunres_dst, migration=False)
# Migration tunnel setup:
migtunres_src = source.proxy.tun_setup()
def rb_migtun_src():
source.proxy.tun_destroy(migtunres_src)
self.checkpoint(rb_migtun_src)
migtunres_dst = dest.proxy.tun_setup(local=False)
def rb_migtun_dst():
dest.proxy.tun_destroy(migtunres_dst)
self.checkpoint(rb_migtun_dst)
source.proxy.tun_connect(migtunres_src, migtunres_dst, dest.ip)
dest.proxy.tun_connect_hv(migtunres_dst, migration=True)
# Initiate the live migration:
self.report('migration in progress')
source.proxy.vm_migrate_tunneled(vm_name, tunres_src,
migtunres_src, _timeout=None)
# At this point, if the operation is a success, all that remains is to
# clean up the disks and vm on the source hypervisor. This operation
# *CANNOT* be cancelled or rolled back if anything fails (unlikely). The
# migration must be considered a success, and the only way to undo it
# is to start a new migration in the opposite direction.
# Delete the rollback list. This is mandatory to avoid data loss if the
# cleanup code below fails.
self.report('cleanup')
self._wayback = []
source.proxy.tun_destroy(tunres_src)
dest.proxy.tun_destroy(tunres_dst)
source.proxy.tun_destroy(migtunres_src)
dest.proxy.tun_destroy(migtunres_dst)
for cb_cleanup in reversed(to_cleanup):
cb_cleanup()
# Cleanup the disks:
for disk in disks:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
source.proxy.vol_delete(pool, name)
# Setup autostart as it was before the migration
dest.proxy.vm_set_autostart(vm_name,
before_clone_autostart)
self.logger.info('Migration completed with success')
def _sync_disk(self, vm, disk, source, dest):
to_cleanup = []
# get information about the disk:
pool = vm.get('disk%s_pool' % disk)
name = vm.get('disk%s_vol' % disk)
size = vm.get('disk%s_size' % disk)
assert pool is not None, 'pool tag doesn\'t exist'
assert name is not None, 'name tag doesn\'t exist'
assert size is not None, 'size tag doesn\'t exist'
status_msg = 'sync volume %s/%s (%%s)' % (pool, name)
self.report(status_msg % 'creation')
# create the volume on destination:
dest.proxy.vol_create(pool, name, int(size))
self.logger.info('Created volume %s/%s on destination '
'hypervisor', pool, name)
# rollback stuff for this action:
def rb_volcreate():
dest.proxy.vol_delete(pool, name)
self.checkpoint(rb_volcreate)
# set up the drbd synchronization with each hypervisor:
self.report(status_msg % 'setup')
res_src = source.proxy.drbd_setup(pool, name)
def rb_setupsrc():
source.proxy.drbd_shutdown(res_src)
self.checkpoint(rb_setupsrc)
to_cleanup.append(rb_setupsrc)
res_dst = dest.proxy.drbd_setup(pool, name)
def rb_setupdst():
dest.proxy.drbd_shutdown(res_dst)
self.checkpoint(rb_setupdst)
to_cleanup.append(rb_setupdst)
# start connection of drbd:
self.report(status_msg % 'connect')
watcher = AsyncWatcher()
watcher.register(source.conn, 'drbd_connect', res_src, res_dst, dest.ip)
watcher.register(dest.conn, 'drbd_connect', res_dst, res_src, source.ip)
msgs = watcher.wait(timeout=30)
for msg in msgs:
if 'error' in msg:
msg = 'error while drbd_connect: %s' % msg['error']['message']
raise JobCancelError(msg)
# setup topology as primary/secondary:
source.proxy.drbd_role(res_src, True)
dest.proxy.drbd_role(res_dst, False)
# Wait for the end of the synchronization:
sync_running = True
while sync_running:
status = dest.proxy.drbd_sync_status(res_dst)
if status['done']:
sync_running = False
self.report(status_msg % 'sync %s%%' % status['completion'])
time.sleep(2)
dest.proxy.drbd_role(res_dst, True)
source.proxy.drbd_takeover(res_src, True)
def rb_takeover_src():
source.proxy.drbd_takeover(res_src, False)
self.checkpoint(rb_takeover_src)
to_cleanup.append(rb_takeover_src)
dest.proxy.drbd_takeover(res_dst, True)
def rb_takeover_dst():
dest.proxy.drbd_takeover(res_dst, False)
self.checkpoint(rb_takeover_dst)
to_cleanup.append(rb_takeover_dst)
return to_cleanup
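The `checkpoint(rollback)` / `to_cleanup` pattern used throughout `_sync_disk` — register an undo callback after every side effect and run them in reverse order when unwinding — boils down to a LIFO stack (a minimal sketch, not the real Job API):

```python
class RollbackStack:
    """Minimal sketch of the checkpoint/rollback pattern used by the jobs."""

    def __init__(self):
        self._undo = []

    def checkpoint(self, rollback=None):
        # Register an undo callback for the side effect just performed.
        if rollback is not None:
            self._undo.append(rollback)

    def rollback(self):
        # Undo side effects in reverse order of registration (LIFO).
        while self._undo:
            self._undo.pop()()

log = []
stack = RollbackStack()
stack.checkpoint(lambda: log.append('delete volume'))   # e.g. rb_volcreate
stack.checkpoint(lambda: log.append('shutdown drbd'))   # e.g. rb_setupsrc
stack.rollback()
assert log == ['shutdown drbd', 'delete volume']  # most recent undone first
```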
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/jobs/killclient.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import time
from cloudcontrol.common.jobs import Job
class KillClientJob(Job):
""" A job used to kill connected accounts.
Mandatory items:
* account: the account login to kill
Optional items:
* gracetime: time in seconds to wait before killing the user
"""
def job(self, server, account, gracetime=None):
if gracetime is not None:
time.sleep(int(gracetime))
self.checkpoint()
server.kill(account)

cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/jobs/killoldcli.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
import time
from cloudcontrol.common.jobs import Job
class KillOldCliJob(Job):
""" Typically an hidden job used to kill clients who are connected/idle
since too much time.
Mandatory items:
* maxcon: maximum connection time in minutes
* maxidle: maximum idle time in minutes
Optional items:
* delay: delay in seconds between two checks (default 60s)
"""
DEFAULT_DELAY = 60
def job(self, server, maxcon, maxidle, delay=DEFAULT_DELAY):
self.state.title = 'Kill old clients (delay=%s)' % delay
while True:
self.checkpoint()
for client in list(server.iterclients('cli')):
if client.uptime > (maxcon * 60) or client.idle > (maxidle * 60):
server.kill(client.login)
self.logger.info('Disconnected %s because of a too long connection'
' or idle time', client.login)
time.sleep(delay)
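`maxcon` and `maxidle` are expressed in minutes while `client.uptime` and `client.idle` are in seconds, hence the `* 60` in the comparison above; the predicate in isolation:

```python
def should_kill(uptime_s, idle_s, maxcon_min, maxidle_min):
    # Same threshold test as KillOldCliJob: minutes converted to seconds.
    return uptime_s > maxcon_min * 60 or idle_s > maxidle_min * 60

assert should_kill(3601, 0, 60, 15)        # connected just over an hour
assert not should_kill(100, 100, 60, 15)   # well under both thresholds
```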
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/jobsinterface.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" An interface to the server for the JobsManager
"""
from datetime import datetime
from cloudcontrol.server.db import SObject
from cloudcontrol.common.jobs import JobsManagerInterface
from cloudcontrol.common.tql.db.tag import StaticTag, CallbackTag
from cloudcontrol.common.tql.db.helpers import taggify
class ServerJobsManagerInterface(JobsManagerInterface):
TAG_ATTRIBUTES = ('title', 'status', 'state', 'owner', 'created', 'ended',
'attachments', 'batch')
def __init__(self, server):
self._server = server
self._jobs = {} # Store each TQL object relative to a job
def on_job_created(self, job):
tql_object = SObject(job.id)
tql_object.register(StaticTag('r', 'job'))
self._server.db.register(tql_object)
self._jobs[job.id] = tql_object
def on_job_updated(self, job):
tql_object = self._jobs[job.id]
for name in self.TAG_ATTRIBUTES:
value = getattr(job, name, None)
if value is None:
continue
value = taggify(value)
if name not in tql_object:
tag = StaticTag(name, '')
tql_object.register(tag)
else:
tag = tql_object[name]
tag.value = value
if 'duration' not in tql_object:
def compute_duration():
end = datetime.now() if job.ended is None else job.ended
dt = end - job.created
return dt.seconds + dt.days * 86400
tql_object.register(CallbackTag('duration', compute_duration, ttl=0))
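`compute_duration` converts a `timedelta` to whole seconds by hand (`dt.seconds + dt.days * 86400`), dropping microseconds; a quick check of that arithmetic:

```python
from datetime import datetime, timedelta

def duration_seconds(created, ended):
    # Same arithmetic as compute_duration(): days folded into seconds,
    # microseconds dropped (timedelta normalizes to days/seconds/microseconds).
    dt = ended - created
    return dt.seconds + dt.days * 86400

start = datetime(2012, 1, 1, 0, 0, 0)
end = start + timedelta(days=2, hours=1, seconds=30)
assert duration_seconds(start, end) == 2 * 86400 + 3600 + 30
```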
def on_job_purged(self, job_id):
tql_object = self._jobs[job_id]
self._server.db.unregister(tql_object)
del self._jobs[job_id]

cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/repository.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" A repository class used to manage a file repository.
"""
import os
from threading import Lock
from hashlib import sha1
from cloudcontrol.server.db import SObject
from cloudcontrol.common.tql.db.tag import StaticTag
class RepositoryOperationError(Exception):
""" Error while repository operation.
"""
class Repository(object):
""" A class abstracting operations on a file repository.
"""
def __init__(self, logger, server, directory, role='file'):
self.logger = logger
self._server = server
self._directory = directory
self._lock = Lock()
self._objects = {} # Mapping between file names and tql objects
self._role = role
# Load TQL objects of already existing files:
for name in self.list():
obj_id = self._object_id(name)
sha1_hash, content = self.load(name)
tql_object = SObject(obj_id)
tql_object.register(StaticTag('r', self._role))
tql_object.register(StaticTag('hash', sha1_hash))
tql_object.register(StaticTag('name', name))
tql_object.register(StaticTag('size', len(content)))
self._objects[name] = tql_object
self._server.db.register(tql_object)
def _fullname(self, name):
""" Return the file fullname depending to its name.
"""
return os.path.join(self._directory, name)
def _object_id(self, name):
""" Return the TQL object id of the file.
"""
return '%s-%s' % (self._role, name)
def save(self, name, content):
""" Save a file to the repository.
"""
fullname = self._fullname(name)
self.logger.debug('Saving file %r to %s', name, fullname)
try:
with open(fullname, 'w') as ffile:
ffile.write(content)
except IOError as err:
msg = 'Error while saving: %s' % err
raise RepositoryOperationError(msg)
else:
obj_id = self._object_id(name)
sha1_hash = sha1(content).hexdigest()
if name in self._objects:
tql_object = self._objects[name]
tql_object['hash'].value = sha1_hash
tql_object['size'].value = len(content)
else:
tql_object = SObject(obj_id)
tql_object.register(StaticTag('r', self._role))
tql_object.register(StaticTag('hash', sha1_hash))
tql_object.register(StaticTag('name', name))
tql_object.register(StaticTag('size', len(content)))
self._server.db.register(tql_object)
self._objects[name] = tql_object
return obj_id
def load(self, name, empty_if_missing=False):
""" Load a file from the repository and compute its sha1sum.
:return: a tuple (sha1_hash, content)
"""
fullname = self._fullname(name)
try:
with open(fullname, 'r') as ffile:
content = ffile.read()
except IOError as err:
if empty_if_missing and err.errno == 2:
return sha1('').hexdigest(), ''
msg = 'Error while loading: %s' % err
raise RepositoryOperationError(msg)
else:
sha1_hash = sha1(content).hexdigest()
self.logger.debug('Loaded file %r, hash = %s', name, sha1_hash)
return sha1_hash, content
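`save()` and `load()` agree on one contract: the SHA-1 exposed as the `hash` tag is always the digest of the file content on disk. A minimal sketch of that round trip (file names here are hypothetical; `.encode()` is needed on Python 3, while this code base is Python 2):

```python
import hashlib
import os
import tempfile

def save(directory, name, content):
    """Write a file and return its sha1, as Repository.save() does."""
    with open(os.path.join(directory, name), 'w') as f:
        f.write(content)
    return hashlib.sha1(content.encode()).hexdigest()

def load(directory, name):
    """Read a file back and recompute its sha1, as Repository.load() does."""
    with open(os.path.join(directory, name)) as f:
        content = f.read()
    return hashlib.sha1(content.encode()).hexdigest(), content

tmp = tempfile.mkdtemp()
digest = save(tmp, 'motd', 'hello')
# The digest recomputed at load time matches the one returned by save:
assert load(tmp, 'motd') == (digest, 'hello')
```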
def delete(self, name):
""" Delete a file from the repository.
:param name: name of the file to delete
"""
tql_object = self._objects.pop(name, None)
if tql_object is None:
raise RepositoryOperationError('Unknown object')
self._server.db.unregister(tql_object)
try:
os.unlink(self._fullname(name))
except OSError as err:
msg = 'Error while deleting: %s' % err
raise RepositoryOperationError(msg)
def list(self):
""" List files of the repository.
"""
return os.listdir(self._directory)

cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/rights.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" Rights manager.
"""
import os
import json
from threading import Lock
from fnmatch import fnmatch
class Rule(object):
""" Represent a right rule.
"""
ACCEPT = 1
DENY = 2
__slots__ = ('_match', '_method', '_tql', '_action')
def __init__(self, match, method, tql=None, action=ACCEPT):
self._match = match
self._method = method
self._tql = tql
self._action = action
def __repr__(self):
return '<Rule match=%r method=%r tql=%r action=%r>' % (self.match,
self.method,
self.tql,
self.action)
@classmethod
def from_dict(cls, rule_dict):
""" Construct a Rule object from a dict.
The dict must contain the following keys: match, method, tql, action
"""
action = rule_dict['action']
if action == 'accept':
action = Rule.ACCEPT
elif action == 'deny':
action = Rule.DENY
else:
raise ValueError('Bad action value, must be accept or deny')
return cls(rule_dict['match'], rule_dict['method'],
rule_dict['tql'], action)
def to_dict(self):
return {'match': self.match, 'method': self.method, 'tql': self.tql,
'action': {Rule.ACCEPT: 'accept', Rule.DENY: 'deny'}[self.action]}
@property
def match(self):
return self._match
@property
def method(self):
return self._method
@property
def tql(self):
return self._tql
@property
def action(self):
return self._action
class RightManager(object):
""" Right manager handle persistent storage of rights rules.
"""
INITIAL_RULES = [Rule('id', '*', 'id', Rule.ACCEPT)]
def __init__(self, logger, rules_filename):
self.logger = logger
self._rules_filename = rules_filename
self._rules = RightManager.INITIAL_RULES
self._lock = Lock()
# Create default ruleset if no file exists:
if not os.path.exists(self._rules_filename):
self.save()
self.logger.info('Created default ruleset')
else:
self.reload()
def export(self):
""" Export the current ruleset to list of dict.
"""
return [r.to_dict() for r in self._rules]
def load(self, ruleset, save=True):
""" Load a ruleset.
"""
self.logger.info('Loaded %d rules', len(ruleset))
with self._lock:
rules_tmp = []
for rule in ruleset:
rules_tmp.append(Rule.from_dict(rule))
self._rules = rules_tmp
if save:
self.save()
def reload(self):
""" Reload ruleset from the disk.
"""
with open(self._rules_filename, 'r') as frules:
rules = json.load(frules)
self.load(rules['ruleset'], save=False)
def save(self):
""" Save the current loaded ruleset to disk.
"""
with open(self._rules_filename, 'w') as frules:
json.dump({'ruleset': [r.to_dict() for r in self._rules]}, frules)
def iter_rules_method(self, method):
""" Iterate over rules matching the specified method.
"""
for rule in self._rules:
if fnmatch(method, rule.method):
yield rule
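`iter_rules_method` relies on shell-style globbing from `fnmatch`, so a rule whose method pattern is `vm_*` matches every VM RPC. For instance (the rule set below is hypothetical):

```python
from fnmatch import fnmatch

# (pattern, action) pairs, in evaluation order.
rules = [('vm_*', 'accept'), ('*', 'deny')]

def matching_actions(method):
    # Same test as iter_rules_method(): glob pattern against method name.
    return [action for pattern, action in rules if fnmatch(method, pattern)]

assert matching_actions('vm_migrate') == ['accept', 'deny']
assert matching_actions('kill') == ['deny']
```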
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/server.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" Main class of cc-server.
"""
import os
from fnmatch import fnmatch as glob
from sjrpc.server import SSLRpcServer
from sjrpc.utils import RpcHandler, pass_connection
from cloudcontrol.server.conf import CCConf
from cloudcontrol.server.exceptions import (AlreadyRegistered,
NotConnectedAccountError,
AuthenticationError,
BadRoleError)
#from cloudcontrol.server.jobs import JobsManager
from cloudcontrol.server.clients import Client
from cloudcontrol.server.rights import RightManager
from cloudcontrol.server.jobs import KillOldCliJob
from cloudcontrol.server.repository import Repository
from cloudcontrol.server.db import SObject, SRequestor
from cloudcontrol.common.tql.db.tag import StaticTag
from cloudcontrol.common.tql.db.db import TqlDatabase, TqlResponse
from cloudcontrol.common.jobs import JobsManager, JobsStore
from cloudcontrol.server.jobsinterface import ServerJobsManagerInterface
# Import all enabled roles:
import cloudcontrol.server.clients.cli
import cloudcontrol.server.clients.host
import cloudcontrol.server.clients.hv
import cloudcontrol.server.clients.bootstrap
import cloudcontrol.server.clients.spv
class WelcomeHandler(RpcHandler):
""" Default handler used on client connections of the server.
"""
def __init__(self, server):
self._server = server
@pass_connection
def authentify(self, conn, login, password):
return self._server.authenticate(conn, login, password)
class CCServer(object):
""" CloudControl server main class.
:param conf_dir: the directory that store the client configuration
:param certfile: the path to the ssl certificate
:param keyfile: the path to the ssl key
:param address: the interface to bind
:param port: the port to bind
"""
# These tags are reserved and cannot be set by a user:
RESERVED_TAGS = ('id', 'a', 'r', 'close', 'con', 'ip', 'p')
def __init__(self, logger, conf_dir, maxcon, maxidle, certfile=None,
keyfile=None, address='0.0.0.0', port=1984):
self.logger = logger
self._clients = {} # Clients connected to the server
# The interface object to the configuration directory:
self.conf = CCConf(self.logger.getChild('conf'), conf_dir)
# Some settings:
self._maxcon = maxcon
self._maxidle = maxidle
# SSL configuration stuff:
if certfile:
self.logger.info('SSL Certificate: %s', certfile)
if keyfile:
self.logger.info('SSL Key: %s', keyfile)
self.db = TqlDatabase(default_requestor=SRequestor())
# Create the rpc server:
self.logger.info('Listening on %s:%s', address, port)
self.rpc = SSLRpcServer.from_addr(address, port, certfile=certfile,
keyfile=keyfile,
conn_kw=dict(handler=WelcomeHandler(self),
on_disconnect='on_disconnect'))
self.motd_filename = os.path.join(conf_dir, 'motd')
# The jobs manager:
self.jobs = JobsManager(self.logger.getChild('jobs'),
ServerJobsManagerInterface(self),
JobsStore(os.path.join(conf_dir, 'jobs')))
# The rights manager:
self.rights = RightManager(self.logger.getChild('rights'),
os.path.join(conf_dir, 'ruleset'))
self.logger.info('Server started')
# Script repository:
scripts_directory = os.path.join(conf_dir, 'scripts')
if not os.path.isdir(scripts_directory):
os.mkdir(scripts_directory)
self.scripts = Repository(self.logger.getChild('scripts'), self,
scripts_directory, role='script')
# Plugin repository:
plugins_directory = os.path.join(conf_dir, 'plugins')
if not os.path.isdir(plugins_directory):
os.mkdir(plugins_directory)
self.plugins = Repository(self.logger.getChild('plugins'), self,
plugins_directory, role='plugin')
def _update_accounts(self):
""" Update the database with accounts.
"""
db_accounts = set((obj['a'].value for obj in self.db.objects if 'a' in obj))
accounts = set(self.conf.list_accounts())
to_register = accounts - db_accounts
to_unregister = db_accounts - accounts
for login in to_register:
conf = self.conf.show(login)
obj = SObject(login)
obj.register(StaticTag('r', conf['role']), override=True)
obj.register(StaticTag('a', login), override=True)
# Register static tags:
for tag, value in self.conf.show(login)['tags'].iteritems():
obj.register(StaticTag(tag, value), override=True)
self.db.register(obj)
for login in to_unregister:
self.db.unregister(login)
def iterclients(self, role=None):
""" Iterate over connected clients with an optionnal role filter.
:param role: role to filter
"""
for client in self._clients.itervalues():
if role is None or client.ROLE == role:
yield client
def authenticate(self, conn, login, password):
""" Authenticate a client against provided login and password.
If the authentication is a success, register the client on the server
and return the client role, else, raise an exception.
"""
logmsg = 'Authentication error from %s: '
with self.conf:
try:
role = self.conf.authentify(login, password)
except CCConf.UnknownAccount:
raise AuthenticationError('Unknown login')
else:
if 'close' in self.conf.show(login)['tags']:
self.logger.warning(logmsg + 'account closed (%s)', conn.getpeername(), login)
raise AuthenticationError('Account is closed')
if role is None:
self.logger.warning(logmsg + 'bad login/password (%s)', conn.getpeername(), login)
raise AuthenticationError('Bad login/password')
else:
if role not in Client.roles:
self.logger.warning(logmsg + 'bad role in account config (%s)', conn.getpeername(), login)
raise BadRoleError('%r is not a legal role' % role)
# If authentication is a success, try to register the client:
client = self.register(login, role, conn)
return client.role
def wall(self, sender, message):
""" Send a wall to all connected cli.
"""
self.logger.info('Wall from %s: %s', sender, message)
for client in self.iterclients('cli'):
client.wall(sender, message)
def register(self, login, role, connection):
""" Register a new connected account on the server.
:param login: login of the account
:param connection: connection to register
:param tags: tags to add for the client
"""
client = Client.from_role(role, None, login, self, connection)
client.logger = self.logger.getChild('clients.%s' % client.login)
if client.login in self._clients:
if client.KILL_ALREADY_CONNECTED:
self.kill(client.login)
else:
raise AlreadyRegistered('A client is already connected with this account.')
client.attach()
self._clients[client.login] = client
return client
def unregister(self, client):
""" Unregister a client.
"""
del self._clients[client.login]
def run(self):
""" Run the server mainloop.
"""
# Register accounts on the database:
self._update_accounts()
# Running server internal jobs:
self.jobs.spawn(KillOldCliJob, None, system=True,
settings={'server': self,
'maxcon': self._maxcon,
'maxidle': self._maxidle})
self.logger.debug('Running rpc mainloop')
self.rpc.run()
def get_client(self, login):
""" Get a connected client by its login.
:param login: login of the connection to get
:return: the client instance
"""
return self._clients[login]
def kill(self, login):
""" Disconnect from the server the client identified by provided login.
:param login: the login of the user to disconnect
:throws NotConnectedAccountError: when the provided account is not
connected (or doesn't exist).
"""
client = self._clients.get(login)
if client is None:
raise NotConnectedAccountError('The account %s is not '
'connected' % login)
client.shutdown()
def save_motd(self, motd):
""" Save a new message of the day.
"""
with open(self.motd_filename, 'w') as fmotd:
fmotd.write(motd)
def load_motd(self):
""" Get the current message of the day.
"""
try:
with open(self.motd_filename) as fmotd:
return fmotd.read()
except IOError as err:
if err.errno == 2:
return '' # Return empty MOTD
raise
def filter(self, tql_response, requester, method):
""" Filter the provided TqlResponse object using rules matching the
provided requester.
"""
requestor = tql_response._requestor
allowed_result = TqlResponse(requestor)
deny_rules = []
for rule in self.rights.iter_rules_method(method):
# Is the rule matching the requester object:
if requester not in self.db.raw_query(rule.match):
continue
if rule.action == rule.ACCEPT:
allowed_result |= tql_response & self.db.raw_query(rule.tql)
elif rule.action == rule.DENY:
# Defer deny action to the end of process:
deny_rules.append(rule)
for rule in deny_rules:
allowed_result = allowed_result - self.db.raw_query(rule.tql)
return allowed_result
def check(self, query, requester, method):
""" Check if the requester have right to call method on objects
matched by the provided query.
"""
matched = self.db.raw_query(query)
filtered = self.filter(matched, requester, method)
# Comparing the lengths is sufficient since the result of
# a filter operation is guaranteed to be a subset of its input:
return len(filtered) == len(matched)
def check_method(self, requester, method):
""" Check if the requester have right to call method.
"""
ok = False # Default policy is to reject
for rule in self.rights.iter_rules_method(method):
if requester not in self.db.raw_query(rule.match):
continue
if rule.action == rule.ACCEPT:
ok = True
if rule.action == rule.DENY:
return False
return ok
def list(self, query, show=None, requester=None, method=None):
""" List object authorized for requester object, using method.
"""
if method is not None and requester is None:
raise ValueError('Method defined but not requester')
self._update_accounts()
result = self.db.raw_query(query)
if method:
result = self.filter(result, requester, method)
# Render the object to dict showing specified tags:
objects = []
if show is None:
show = ()
for obj in result:
tags_to_show = set(obj.itermatchingtags(show)) | set(obj.show_tags)
result._requestor.fetch((obj,), [t.name for t in tags_to_show])
objects.append(obj.to_dict([t.name for t in tags_to_show]))
return objects
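`filter()` above evaluates accept rules as set unions and defers deny rules to the end as set differences, so a deny always wins over any accept. The same logic with plain sets (the TQL queries are replaced by hypothetical precomputed sets):

```python
def filter_objects(matched, rules):
    """Sketch of CCServer.filter(): accept -> union, deny deferred to last."""
    allowed = set()
    denied = set()
    for action, selection in rules:
        if action == 'accept':
            allowed |= matched & selection
        elif action == 'deny':
            denied |= selection  # deferred, applied after all accepts
    return allowed - denied

matched = {'vm1', 'vm2', 'vm3'}
rules = [('accept', {'vm1', 'vm2', 'vm3'}), ('deny', {'vm2'})]
assert filter_objects(matched, rules) == {'vm1', 'vm3'}  # deny wins
```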
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/cloudcontrol/server/utils.py

# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
""" Some helpers used by cc-server.
"""
from threading import Lock
class Acquires(object):
""" Context manager used to acquire more than one lock at once. It works
rely on the fact that if locks are always acquired in the same order,
we can't enter in a deadlock situation.
Usage is very simple:
>>> a = Lock()
>>> b = Lock()
>>> with Acquires(b, a):
... print 'locked'
...
.. seealso::
http://dabeaz.blogspot.com/2009/11/python-thread-deadlock-avoidance_20.html
"""
def __init__(self, *locks):
self._locks = sorted(set(locks), key=lambda x: id(x))
def __enter__(self):
for lock in self._locks:
lock.acquire()
def __exit__(self, exc_type, exc_value, traceback):
for lock in self._locks:
lock.release()
class AcquiresAllOrNone(Acquires):
""" Class that extend Acquires to allow to release all lock if one of
them is not free.
"""
# Global acquire lock:
acquirelock = Lock()
def __enter__(self):
while True:
with self.acquirelock:
acquired = []
for lock in self._locks:
if not lock.acquire(False):
for lock_acquired in acquired:
lock_acquired.release()
break
else:
acquired.append(lock)
else:
break
def itercounter(iterator, callback, *args, **kwargs):
""" Iterate over an iterator and call callback with number of iterations.
"""
count = 0
for item in iterator:
count += 1
yield item
callback(count, *args, **kwargs)
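The deadlock-avoidance argument in the `Acquires` docstring — every caller takes locks in the same global order, sorted by `id()`, which rules out circular wait — can be demonstrated with a minimal re-implementation of that idea:

```python
import threading

def acquire_ordered(*locks):
    # Sort by id() so every caller takes locks in the same global order;
    # with a consistent order, no circular wait (hence no deadlock) can form.
    ordered = sorted(set(locks), key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

a, b = threading.Lock(), threading.Lock()
results = []

def worker(first, second):
    held = acquire_ordered(first, second)
    results.append(len(held))
    for lock in reversed(held):
        lock.release()

t1 = threading.Thread(target=worker, args=(a, b))
t2 = threading.Thread(target=worker, args=(b, a))  # opposite order: still safe
t1.start(); t2.start(); t1.join(); t2.join()
assert results == [2, 2]  # both threads acquired both locks and finished
```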
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/Makefile

# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  dirhtml   to make HTML files named index.html in directories"
	@echo "  pickle    to make pickle files"
	@echo "  json      to make JSON files"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  qthelp    to make HTML files and a qthelp project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"
	@echo "  doctest   to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/cc-server.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/cc-server.qhc"

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output" \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the" \
	      "results in $(BUILDDIR)/doctest/output.txt."
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/api.rst 0000664 0000000 0000000 00000001376 12545176055 0023030 0 ustar 00root root 0000000 0000000 Server library API
==================
.. automodule:: ccserver

ccserver module
---------------

.. automodule:: ccserver.ccserver
    :members:
    :inherited-members:

tql module
----------

.. automodule:: ccserver.tql
    :members:
    :inherited-members:

objectsdb module
----------------

.. automodule:: ccserver.objectsdb
    :members:
    :inherited-members:

conf module
-----------

.. automodule:: ccserver.conf
    :members:
    :inherited-members:

handlers module
---------------

.. automodule:: ccserver.handlers
    :members:
    :inherited-members:

orderedset module
-----------------

.. automodule:: ccserver.orderedset
    :members:
    :inherited-members:

utils module
------------

.. automodule:: ccserver.utils
    :members:
    :inherited-members:
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/cc-addaccount.rst 0000664 0000000 0000000 00000002221 12545176055 0024735 0 ustar 00root root 0000000 0000000 ===============
cc-addaccount
===============
-----------------------------------------------------
A tool to create accounts in your cc-server directory
-----------------------------------------------------

:Author: Antoine Millet
:Manual section: 1

SYNOPSIS
========

cc-addaccount [options]

DESCRIPTION
===========

CloudControl is a tool designed to facilitate the administration of a wide
set of machines, virtualised or not. This binary allows you to create
accounts in the cc-server account directory, even when cc-server is not
running.

By default, the cc-server account directory is ``/var/lib/cc-server`` (which
is also the default for cc-server); if you changed it, use the
``--directory`` option.

OPTIONS
=======

-h, --help            show this help message and exit
-d DIRECTORY, --directory=DIRECTORY
                      account directory
-p, --password        ask for the password
-g, --god             add a rule to allow all actions
-c COPY, --copy=COPY  copy this already existing account
-r ROLE, --role=ROLE  specify the role (default cli)

SEE ALSO
========

* Manual of cc-server: ``man 1 cc-server``
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/cc-server.rst 0000664 0000000 0000000 00000001677 12545176055 0024154 0 ustar 00root root 0000000 0000000 ===========
cc-server
===========
------------------------------------------------
Launch the CloudControl server on your computer.
------------------------------------------------
:Author: Antoine Millet
:Manual section: 1

SYNOPSIS
========

cc-server [options]

DESCRIPTION
===========

CloudControl is a tool designed to facilitate the administration of a wide
set of machines, virtualised or not. This binary launches the central server
of the system.

OPTIONS
=======

--version             show program's version number and exit
-h, --help            show this help message and exit
-c CONFIG, --config=CONFIG
                      configuration file (default: /etc/cc-server.conf)
-d, --daemonize       run as daemon and write pid file
-p PID_FILE, --pid-file=PID_FILE
                      pid file (default: /var/run/cc-server.pid)

SEE ALSO
========

* Manual of cc-addaccount: ``man 1 cc-addaccount``
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/conf.py 0000664 0000000 0000000 00000015042 12545176055 0023017 0 ustar 00root root 0000000 0000000 # -*- coding: utf-8 -*-
#
# cc-server documentation build configuration file, created by
# sphinx-quickstart on Mon Dec 20 16:58:25 2010.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.append(os.path.abspath('../'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.pngmath',
'sphinx.ext.ifconfig',]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'cc-server'
copyright = u'2010, Smartjog'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from cloudcontrol.server import __version__
version = __version__
# The full version, including alpha/beta/rc tags.
release = version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'cc-serverdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'cc-server.tex', u'cc-server Documentation',
u'Smartjog', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/howtos/ 0000775 0000000 0000000 00000000000 12545176055 0023041 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/howtos/index.rst 0000664 0000000 0000000 00000000125 12545176055 0024700 0 ustar 00root root 0000000 0000000 Howtos
======
Contents:
.. toctree::
:maxdepth: 2
python_interpreter_client
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/howtos/python_interpreter_client.rst 0000664 0000000 0000000 00000003176 12545176055 0031104 0 ustar 00root root 0000000 0000000 Simple client with python interpreter
=====================================
You can easily connect on a CloudControl server with a Python shell for
debugging purpose.
Before to start, make sure you have installed the ``python-sjrpc`` package::
aptitude install python-sjrpc
1. Start a new Python shell
---------------------------
::
Python 2.6.6 (r266:84292, Oct 9 2010, 12:24:52)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
2. Import sjrpc
---------------
>>> from sjrpc.client import SimpleRpcClient
>>> from sjrpc.utils import ConnectionProxy
3. Import socket & SSL modules
------------------------------
>>> import socket
>>> import ssl
4. Create the socket object
---------------------------
>>> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Wrap it into ssl socket wrapper to enable SSL (mandatory):
>>> sock = ssl.wrap_socket(sock, certfile=None, cert_reqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLSv1)
And connect it:
>>> sock.connect(('ip.add.res.s', 1984))
5. Create the client object
---------------------------
>>> client = SimpleRpcClient(sock)
You can also create a proxy object to easily call remote methods:
>>> p = ConnectionProxy(client)
6. Launch the client event loop
-------------------------------
To keep control on the interpreter, you need to launch the event loop in a
new thread:
>>> import threading
>>> threading.Thread(target=client.run).start()
7. Authenticate & play !
------------------------
>>> p.authentify('amillet', 'jeancloud')
>>> p.list_vm()
[]
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/doc/index.rst 0000664 0000000 0000000 00000000352 12545176055 0023357 0 ustar 00root root 0000000 0000000 Welcome to cc-server's documentation!
=====================================
Contents:

.. toctree::
    :maxdepth: 2

    api
    howtos/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/etc/ 0000775 0000000 0000000 00000000000 12545176055 0021524 5 ustar 00root root 0000000 0000000 cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/etc/cc-server.conf 0000664 0000000 0000000 00000001137 12545176055 0024266 0 ustar 00root root 0000000 0000000 [server]
# Detach the process from it parent:
daemonize = false
# Set the user/group of the process:
user = cc-server
group = cc-server
# Write a pidfile to the specified path:
#pidfile =
# Set the umask of the process:
#umask = 077
# Certificates for the SSL machinery (mandatory):
ssl_key =
ssl_cert =
# Interface to listen:
interface = 0.0.0.0
# Port to listen:
#port = 1984
# Log debug informations (true/false)
#verbosity = false
# The path to the account database (mandatory):
account_db = /var/lib/cc-server/
# Max connection time for cli:
#maxcon=600
# Max idle time for cli:
#maxcon=30
cc-server-f755d686b7dab54ed4e9df69aaea1e4e2f936a67/setup.py 0000664 0000000 0000000 00000004356 12545176055 0022473 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# This file is part of CloudControl.
#
# CloudControl is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# CloudControl is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with CloudControl. If not, see <http://www.gnu.org/licenses/>.
from setuptools import setup, find_packages
from distutils.command.build import build
import os
import sys
# Retrieval of version:
from cloudcontrol.server import __version__
ldesc = open(os.path.join(os.path.dirname(__file__), 'README')).read()
class BuildMan(build):
    '''
    Build command class used by distutils to generate manpages from RST
    sources while packaging.
    '''

    MANPAGES = ('cc-server', 'cc-addaccount')
    description = 'Build manual from RST sources'

    def run(self):
        from docutils.core import publish_file
        from docutils.writers import manpage
        srcdir = os.path.split(os.path.abspath(__file__))[0]
        for man in self.MANPAGES:
            publish_file(source_path=os.path.join(srcdir, 'doc/%s.rst' % man),
                         destination_path=os.path.join(srcdir, '%s.1' % man),
                         writer=manpage.Writer())
build.sub_commands.insert(0, ('build_man', None))
cmdclass = {'build_man': BuildMan}
setup(
    name='cc-server',
    version=__version__,
    description='CloudControl server',
    long_description=ldesc,
    author='Antoine Millet',
    author_email='antoine.millet@smartjog.com',
    license='LGPL3',
    packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
    scripts=['bin/cc-server', 'bin/cc-addaccount'],
    namespace_packages=['cloudcontrol'],
    data_files=(
        ('/etc/', ('etc/cc-server.conf',)),
    ),
    classifiers=[
        'Operating System :: Unix',
        'Programming Language :: Python',
    ],
    cmdclass=cmdclass,
)