==> .gitignore <==
*.pyc
.*.swp
*~
TODO
test
tags

==> AUTHORS <==
* Sébastien LUTTRINGER
* Matthieu GONNET
* Aurélien DUNAND
* Nicolas DELVAUX

==> COPYRIGHT <==
Copyright © 2011-2012 Smartjog S.A.
Copyright © 2011-2012 Sébastien Luttringer

==> DEPENDENCIES <==
Mandatory
=========
- python (>= 2.6)
- python-psutil (>= 0.2.1)
- python-progressbar (>= 2.3)
- python-argparse (>= 1.2.1) [< python 2.7]
- tar
- gzip
Optional
========
- python-paramiko
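
The dependency list above can be probed programmatically. Below is a hypothetical helper (not shipped with InstallSystems) that reports which of the listed Python modules are importable in the current interpreter:

```python
# Hypothetical sketch: check which Python dependencies are importable.
# Module names are taken from the DEPENDENCIES list above.
import importlib


def check_dependencies(modules=("psutil", "progressbar", "argparse")):
    """Return a dict mapping module name -> True if importable."""
    status = {}
    for name in modules:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status
```

A missing optional module (e.g. paramiko) simply shows up as False rather than aborting.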

==> LICENSE <==
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

==> MAINTAINERS <==
Sébastien Luttringer

==> Makefile <==
#!/usr/bin/make
# Installsystems - Python installation framework
# Copyright © 2011-2012 Smartjog S.A
# Copyright © 2011-2012 Sébastien Luttringer
#
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
.PHONY: all tar deb clean cleanbuild buildd dsc doc
NAME=installsystems
VERSION=$(shell sed -rn 's/version = "([^"]+)"/\1/p' installsystems/__init__.py)
BUILD_DIR=__build__
DISTRO=squeeze
all:
	echo all is better than nothing

$(NAME)-$(VERSION).tar.gz:
	git archive --prefix=$(NAME)-$(VERSION)/ HEAD | gzip -9 > $(NAME)-$(VERSION).tar.gz

tar: cleantar $(NAME)-$(VERSION).tar.gz

doc:
	cd doc && make html

dsc: cleanbuild $(NAME)-$(VERSION).tar.gz
	mkdir $(BUILD_DIR)
	tar xfC $(NAME)-$(VERSION).tar.gz $(BUILD_DIR)
	cd $(BUILD_DIR) && dpkg-source -I -b $(NAME)-$(VERSION)

deb: cleanbuild $(NAME)-$(VERSION).tar.gz
	mkdir $(BUILD_DIR)
	tar xfC $(NAME)-$(VERSION).tar.gz $(BUILD_DIR)
	cd $(BUILD_DIR)/$(NAME)-$(VERSION) && dpkg-buildpackage --source-option=-I -us -uc

buildd: dsc
	chmod 644 $(BUILD_DIR)/$(NAME)_*.dsc $(BUILD_DIR)/$(NAME)_*.gz
	scp $(BUILD_DIR)/$(NAME)_*.dsc $(BUILD_DIR)/$(NAME)_*.gz incoming@buildd.fr.lan:$(DISTRO)

clean: cleantar cleanbuild

cleanbuild:
	-rm -rf $(BUILD_DIR)

cleantar:
	-rm -f $(NAME)-*.tar.gz

==> README <==
InstallSystems Next Generation

INSTALLSYSTEMS VERSIONING
_________________________
A valid version is an integer, without dots.
A version n may be followed by a ~ to indicate it is inferior to n.
A version n may be followed by a + to indicate it is superior to n.
Any characters following ~ or + are ignored.
Examples:
1 < 2
2 > 2~dev
2 < 2+dev
2~dev < 2+dev
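
The ordering above can be sketched in a few lines of Python. This is a hypothetical illustration, not the comparison code InstallSystems actually ships:

```python
# Hypothetical sketch of the InstallSystems version ordering described
# above: an integer, optionally followed by "~" (strictly inferior) or
# "+" (strictly superior); anything after "~" or "+" is ignored.
import re


def version_key(version):
    """Map a version string to a sortable (integer, offset) pair."""
    match = re.match(r"^(\d+)([~+]?)", str(version))
    if match is None:
        raise ValueError("invalid version: %r" % version)
    number = int(match.group(1))
    # "~" sorts before the bare integer, "+" sorts after it
    offset = {"~": -1, "": 0, "+": 1}[match.group(2)]
    return (number, offset)


def compare_versions(a, b):
    """Return -1, 0 or 1 as a is inferior, equal or superior to b."""
    ka, kb = version_key(a), version_key(b)
    return (ka > kb) - (ka < kb)
```

With this key, the four examples above order as stated: 1 < 2, 2~dev < 2 < 2+dev.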

IMAGES VERSIONING
_________________
A valid version is an integer. Nothing more!

==> RELEASE <==
# stable release
1) edit version in installsystems/__init__.py
2) edit debian/changelog
3) commit
4) tag
5) make tar
6) upload tarball to forge
7) make buildd
# rc release
1) edit version in installsystems/__init__.py
2) edit debian/changelog
3) commit
4) make DISTRO=sid tar buildd
# squeeze backport
1) create a squeeze branch
2) add ~squeeze0 to the version in debian/changelog
3) make VERSION=3~squeeze0 DISTRO=squeeze tar buildd

==> bin/is <==
#!/usr/bin/python
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
InstallSystems Command line Tool
'''
import os
import time
import datetime
import re
import fnmatch
import warnings
import argparse
import psutil
import socket
import sys
import locale
import installsystems
import installsystems.printer
import installsystems.tools as istools
from installsystems.exception import *
from installsystems.printer import *
from installsystems.repository import Repository
from installsystems.repository import RepositoryManager
from installsystems.repository import RepositoryConfig
from installsystems.image import PackageImage, SourceImage
from installsystems.config import MainConfigFile, RepoConfigFile
################################################################################
# Common functions
################################################################################
def load_repositories(args):
    '''
    Load repositories on a repository manager
    '''
    # remove cache if asked
    if args.no_cache:
        args.cache = None
    # split filter and search in list
    args.repo_filter = Repository.split_repository_list(args.repo_filter)
    args.repo_search = Repository.split_repository_list(args.repo_search)
    # init repo cache object
    repoman = RepositoryManager(args.cache, timeout=args.repo_timeout or args.timeout,
                                filter=args.repo_filter, search=args.repo_search)
    # register repositories (order matters)
    # load repo configs from command line
    if args.repo_path != "":
        repoman.register(RepositoryConfig(istools.smd5sum(args.repo_path)[:8],
                                          path=args.repo_path), temp=True,
                         nosync=args.no_sync)
    # load repo configs from config
    for repoconf in RepoConfigFile(args.repo_config).repos:
        repoman.register(repoconf, nosync=args.no_sync)
    return repoman
def get_images(patterns, repoman, local=True, min=None, max=None):
    '''
    Select and load package images from a standard naming pattern
    Allowed patterns are a direct filename on the filesystem
    or [repo/]image[:version]
    Return the repository as second argument
    '''
    ans = []
    for pattern in patterns:
        # check if image is a local file
        if local and istools.isfile(pattern) and os.path.isfile(pattern):
            ans.append((pattern, None))
        else:  # we need to find the image in a repository
            ans += sorted(repoman.select_images([pattern]).items())
    # check selected images count
    if min is not None and len(ans) < min:
        raise ISError(u"%s images found. Should be at least %s" % (
            len(ans), min))
    # check max selected images
    if max is not None and len(ans) > max:
        raise ISError(u"Too many selected images: %s. Max is %s" % (
            ", ".join([n[0] for n in ans]), max))
    for item in ans:
        if item[1] is None:
            yield PackageImage(item[0]), None
        else:
            r = item[1]
            yield repoman[r["repo"]].get(r["name"], r["version"]), repoman[r["repo"]]
################################################################################
# Commands functions
################################################################################
def c_add(args):
    '''
    Add packaged images into a repository
    '''
    repoman = load_repositories(args)
    repo = repoman[args.repository]
    for image in args.path:
        pkg = PackageImage(image)
        repo.add(pkg, delete=not args.preserve)
def c_build(args):
    '''
    Build a source image in the current directory
    '''
    for path in args.paths:
        arrow("Build %s" % path)
        # chdir inside path if --chdir
        if args.chdir:
            cwd = os.getcwdu()
            os.chdir(path)
            path = "."
        arrowlevel(1)
        # build start time
        t0 = time.time()
        # load source image
        simg = SourceImage(path)
        # do the job
        simg.build(force=args.force, force_payload=args.payload,
                   check=not args.no_check, script=not args.no_script)
        # compute building time
        t1 = time.time()
        dt = int(t1 - t0)
        arrow(u"Build time: %s" % datetime.timedelta(seconds=dt))
        if args.chdir:
            os.chdir(cwd)
        arrowlevel(-1)
def c_cat(args):
    '''
    Display files inside a packaged image
    '''
    repoman = load_repositories(args)
    image, repo = next(get_images([args.pattern], repoman, min=1, max=1))
    for filename in args.file:
        image.cat(filename)

def c_changelog(args):
    '''
    Display changelog of packaged images
    '''
    repoman = load_repositories(args)
    for image, repo in get_images(args.pattern, repoman, min=1):
        image.changelog.show(int(image.version), args.all_version)

def c_check(args):
    '''
    Sanity checks on repositories
    '''
    repoman = load_repositories(args)
    for reponame in args.repository:
        repoman[reponame].check()

def c_chroot(args):
    '''
    Helper to go cleanly inside a chroot
    '''
    istools.chroot(args.path, shell=args.shell, mount=not args.no_mount)

def c_clean(args):
    '''
    Remove unreferenced files from repositories
    '''
    repoman = load_repositories(args)
    for reponame in args.repository:
        repoman[reponame].clean(args.force)
def c_copy(args):
    '''
    Copy an image from a repository to another one
    '''
    repoman = load_repositories(args)
    dstrepo = repoman[args.repository]
    todo = list(get_images(args.pattern, repoman, local=False, min=1))
    # check the user really wants to do this
    if not args.force:
        out("You will copy the following images:")
        for img, repo in todo:
            out(u"  %s/%s:%s" % (repo.config.name, img.name, img.version))
        out(u"Inside repository: #l##b#%s#R#" % dstrepo.config.name)
        if not confirm():
            raise ISError("Aborted!")
    # copy it for real
    for srcimg, srcrepo in todo:
        arrow("Copying %s v%s from repository %s to %s" %
              (srcimg.name, srcimg.version,
               srcrepo.config.name, dstrepo.config.name))
        arrowlevel(1)
        dstrepo.add(srcimg)
        arrowlevel(-1)

def c_del(args):
    '''
    Remove an image package from a repository
    '''
    repoman = load_repositories(args)
    todo = list(get_images(args.pattern, repoman, local=False, min=1))
    # check all source repositories are local (needed for deletion)
    for img, repo in todo:
        if not repo.local:
            raise ISError("Repository %s is not local. Unable to delete" %
                          repo.config.name)
    # check the user really wants to do this
    if not args.force:
        out("You will remove the following images:")
        for img, repo in todo:
            out(u"  %s/%s:%s" % (repo.config.name, img.name, img.version))
        if not confirm():
            raise ISError("Aborted!")
    # delete it for real
    for img, repo in todo:
        arrow("Deleting %s v%s from repository %s" %
              (img.name, img.version, repo.config.name))
        arrowlevel(1)
        repo.delete(img.name, img.version, payloads=not args.preserve)
        arrowlevel(-1)
def c_diff(args):
    '''
    Show differences between two repositories or packaged images
    '''
    repoman = load_repositories(args)
    if args.object[0] in repoman.onlines and args.object[1] in repoman.onlines:
        Repository.diff(repoman[args.object[0]], repoman[args.object[1]])
    else:
        img = get_images(args.object, repoman, min=2, max=2)
        img1, repo1 = next(img)
        img2, repo2 = next(img)
        PackageImage.diff(img1, img2)
def c_extract(args):
    '''
    Extract a packaged image inside a directory
    '''
    repoman = load_repositories(args)
    for image, repo in get_images([args.pattern], repoman, min=1, max=1):
        image.extract(args.path, payload=args.payload, force=args.force,
                      gendescription=args.gen_description)

def c_get(args):
    '''
    Get packaged images from repository to current directory
    '''
    repoman = load_repositories(args)
    for image, repo in get_images(args.pattern, repoman, local=False, min=1):
        image.download(".", image=not args.no_image, payload=args.payload, force=args.force)

def c_help(args):
    '''
    Show help
    '''
    if args.command not in args.subparser.choices:
        args.parser.print_help()
    else:
        args.subparser.choices[args.command].print_help()

def c_info(args):
    '''
    Display info about packaged images
    '''
    repoman = load_repositories(args)
    for image, repo in get_images(args.pattern, repoman, min=1):
        image.show(o_verbose=args.verbose, o_changelog=args.changelog,
                   o_json=args.json)

def c_init(args):
    '''
    Initialize an empty repository
    '''
    repoman = load_repositories(args)
    for reponame in args.repository:
        repoman[reponame].init()
def c_install(args):
    '''
    Install a packaged image
    '''
    # remove old image args
    args.install_parser._remove_action(
        [d for d in args.install_parser._actions if d.dest == "pattern"][0])
    # create a subparser for the current image to have a sexy display of args
    subparser = args.install_parser.add_subparsers().add_parser(args.pattern)
    # select image to install
    repoman = load_repositories(args)
    image, repo = next(get_images([args.pattern], repoman, min=1, max=1))
    # print setup information
    arrow(u"Installing %s v%s" % (image.name, image.version))
    # install start time
    t0 = time.time()
    # run parser scripts with the subparser as parser argument
    image.run_parser(parser=subparser)
    # call the parser again, with extended attributes
    arrow("Parsing arguments")
    # catch exceptions raised in custom argparse actions
    try:
        args = args.parser.parse_args()
    except Exception as e:
        raise ISError("Parsing error", e)
    # run setup scripts
    if not args.dry_run:
        image.run_setup(namespace=args)
    # compute install time
    t1 = time.time()
    dt = int(t1 - t0)
    arrow(u"Install time: %s" % datetime.timedelta(seconds=dt))
def c_list(args):
    '''
    List packaged images in repositories
    '''
    repoman = load_repositories(args)
    if len(args.pattern) == 0 and len(repoman.search) == 0:
        args.pattern = ["*/*"]
    elif len(args.pattern) == 0:
        args.pattern = ["*"]
    repoman.show_images(args.pattern, o_long=args.long, o_json=args.json,
                        o_md5=args.md5, o_date=args.date, o_author=args.author,
                        o_size=args.size, o_url=args.url,
                        o_description=args.description)
def c_move(args):
    '''
    Move a packaged image from a repository to another one
    '''
    repoman = load_repositories(args)
    dstrepo = repoman[args.repository]
    todo = list(get_images(args.pattern, repoman, local=False, min=1))
    # check all source repositories are local (needed for deletion)
    for img, repo in todo:
        if not repo.local:
            raise ISError("Repository %s is not local. Unable to move" %
                          repo.config.name)
    # check the user really wants to do this
    if not args.force:
        out("You will copy and remove the following images:")
        for img, repo in todo:
            out(u"  %s/%s:%s" % (repo.config.name, img.name, img.version))
        out(u"Inside repository: #l##b#%s#R#" % dstrepo.config.name)
        if not confirm():
            raise ISError("Aborted!")
    # move it for real
    for srcimg, srcrepo in todo:
        arrow("Moving %s v%s from repository %s to %s" %
              (srcimg.name, srcimg.version,
               srcrepo.config.name, dstrepo.config.name))
        arrowlevel(1)
        dstrepo.add(srcimg)
        srcrepo.delete(srcimg.name, srcimg.version)
        arrowlevel(-1)
def c_new(args):
    '''
    Create a new source image
    '''
    SourceImage.create(args.path, args.force)

def c_payload(args):
    '''
    List payloads
    '''
    repoman = load_repositories(args)
    repoman.show_payloads(args.payload, o_images=args.images, o_json=args.json)

def c_prepare_chroot(args):
    '''
    Helper to prepare a path to be chrooted
    '''
    istools.prepare_chroot(args.path, mount=not args.no_mount)
def c_repo(args):
    '''
    List repositories
    '''
    # in purge mode we don't need to sync repositories
    if args.purge:
        args.no_sync = True
    repoman = load_repositories(args)
    if args.purge:
        repoman.purge_repositories(args.repository)
    else:
        repoman.show_repositories(args.repository,
                                  online=args.online, local=args.local,
                                  o_url=args.url, o_state=args.state,
                                  o_json=args.json)
def c_search(args):
    '''
    Search for packaged images in repositories
    '''
    repoman = load_repositories(args)
    repoman.search_image(args.pattern)

def c_unprepare_chroot(args):
    '''
    Helper to remove chroot preparation of a path
    '''
    istools.unprepare_chroot(args.path, mount=not args.no_umount)

def c_version(args):
    '''
    Display installsystems version
    '''
    out(installsystems.version)
def arg_parser_init():
    '''
    Create command parser
    '''
    # top level argument parsing
    parser = argparse.ArgumentParser()
    parser.add_argument("-V", "--version", action="version",
                        version=installsystems.version)
    # exclusive group on verbosity
    g = parser.add_mutually_exclusive_group()
    g.add_argument("-v", "--verbosity", default=1,
                   type=int, choices=[0, 1, 2],
                   help="define verbosity level (0: quiet, 1: normal, 2: debug)")
    g.add_argument("-d", "--debug", dest="verbosity",
                   action="store_const", const=2,
                   help="activate debug mode")
    g.add_argument("-q", "--quiet", dest="verbosity",
                   action="store_const", const=0,
                   help="activate quiet mode")
    # common options
    parser.add_argument("-c", "--config", default=u"installsystems",
                        metavar="PATH", help="config file path")
    parser.add_argument("-R", "--repo-config", default=u"repository",
                        metavar="REPO", help="repository config file path")
    parser.add_argument("-s", "--repo-search", default=u"",
                        metavar="REPO,REPO,...",
                        help="search for images inside those repositories")
    parser.add_argument("-f", "--repo-filter", default=u"",
                        metavar="REPO,REPO,...",
                        help="filter repositories by name")
    parser.add_argument("-r", "--repo-path", default=u"", metavar="PATH",
                        help="define a temporary repository")
    parser.add_argument("-T", "--repo-timeout", type=int, default=None,
                        metavar="SECONDS", help="repository access timeout")
    parser.add_argument("-C", "--cache", default=u"", metavar="PATH",
                        help="path of repositories cache")
    parser.add_argument("-t", "--timeout", dest="timeout", type=int, default=None,
                        metavar="SECONDS", help="socket timeout")
    parser.add_argument("--no-cache", action="store_true",
                        help="do not use persistent database caching")
    parser.add_argument("--no-sync", action="store_true",
                        help="do not sync repository database cache")
    parser.add_argument("--no-color", action="store_true",
                        help="do not display colored output")
    parser.add_argument("--nice", type=int, default=None,
                        help="nice of the process")
    parser.add_argument("--ionice-class", choices=["none", "rt", "be", "idle"],
                        help="ionice class of the process (default: none)")
    parser.add_argument("--ionice-level", type=int, default=None,
                        help="ionice class level of the process")
    # create a subparser for commands
    subparser = parser.add_subparsers()
    # add command parser
    p = subparser.add_parser("add", help=c_add.__doc__.lower())
    p.add_argument("-p", "--preserve", action="store_true",
                   help="don't remove image after adding to database")
    p.add_argument("repository", help="repository where images will be added")
    p.add_argument("path", nargs="+", help="local packaged image path")
    p.set_defaults(func=c_add)
    # build command parser
    p = subparser.add_parser("build", help=c_build.__doc__.lower())
    p.add_argument("-c", "--no-check", action="store_true",
                   help="do not check compilation before adding scripts")
    p.add_argument("-C", "--chdir", action="store_true",
                   help="build image inside source image directory, not in current directory")
    p.add_argument("-f", "--force", action="store_true",
                   help="rebuild image if it already exists")
    p.add_argument("-p", "--payload", action="store_true",
                   help="rebuild payloads if they already exist")
    p.add_argument("-s", "--no-script", action="store_true",
                   help="do not execute build script")
    p.add_argument("paths", nargs="*", default=u".")
    p.set_defaults(func=c_build)
    # cat command parser
    p = subparser.add_parser("cat", help=c_cat.__doc__.lower())
    p.add_argument("pattern", help="path|[repository/]image[:version]")
    p.add_argument("file", nargs="+",
                   help="file inside image to cat (globbing allowed)")
    p.set_defaults(func=c_cat)
    # changelog command parser
    p = subparser.add_parser("changelog", help=c_changelog.__doc__.lower())
    p.add_argument("-v", "--all-version", action="store_true",
                   help="display changelog for all versions")
    p.add_argument("pattern", nargs="+", help="path|[repository/]image[:version]")
    p.set_defaults(func=c_changelog)
    # check command parser
    p = subparser.add_parser("check", help=c_check.__doc__.lower())
    p.add_argument("repository", nargs="+", help="repositories to check")
    p.set_defaults(func=c_check)
    # chroot command parser
    p = subparser.add_parser("chroot", help=c_chroot.__doc__.lower())
    p.add_argument("-m", "--no-mount", action="store_true",
                   help="disable mounting of /{proc,dev,sys} inside chroot")
    p.add_argument("-s", "--shell", default=u"/bin/bash",
                   help="shell to call inside chroot")
    p.add_argument("path")
    p.set_defaults(func=c_chroot)
    # clean command parser
    p = subparser.add_parser("clean", help=c_clean.__doc__.lower())
    p.add_argument("-f", "--force", action="store_true",
                   help="clean repository without confirmation")
    p.add_argument("repository", nargs="+", help="repositories to clean")
    p.set_defaults(func=c_clean)
    # copy command parser
    p = subparser.add_parser("copy", help=c_copy.__doc__.lower())
    p.add_argument("-f", "--force", action="store_true",
                   help="copy image without confirmation")
    p.add_argument("pattern", nargs="+",
                   help="[repository/]image[:version]")
    p.add_argument("repository", help="destination repository")
    p.set_defaults(func=c_copy)
    # del command parser
    p = subparser.add_parser("del", help=c_del.__doc__.lower())
    p.add_argument("pattern", nargs="+",
                   help="[repository/]image[:version]")
    p.add_argument("-f", "--force", action="store_true",
                   help="delete image without confirmation")
    p.add_argument("-p", "--preserve", action="store_true",
                   help="preserve payloads; do not remove them from the repository")
    p.set_defaults(func=c_del)
    # diff command parser
    p = subparser.add_parser("diff", help=c_diff.__doc__.lower())
    p.add_argument("object", nargs="+",
                   help="path|repository|[repository/]image[:version]")
    p.set_defaults(func=c_diff)
    # extract command parser
    p = subparser.add_parser("extract", help=c_extract.__doc__.lower())
    p.add_argument("-f", "--force", action="store_true",
                   help="overwrite existing destinations")
    p.add_argument("-g", "--gen-description", action="store_true",
                   help="generate a description file from metadata")
    p.add_argument("-p", "--payload", action="store_true",
                   help="extract payloads")
    p.add_argument("pattern",
                   help="path|[repository/]image[:version]")
    p.add_argument("path", help="image will be extracted in path")
    p.set_defaults(func=c_extract)
    # get command parser
    p = subparser.add_parser("get", help=c_get.__doc__.lower())
    p.add_argument("-f", "--force", action="store_true",
                   help="overwrite existing destinations")
    p.add_argument("-I", "--no-image", action="store_true",
                   help="do not get image")
    p.add_argument("-p", "--payload", action="store_true",
                   help="get payloads")
    p.add_argument("pattern", nargs="+",
                   help="[repository/]image[:version]")
    p.set_defaults(func=c_get)
    # help command parser
    p = subparser.add_parser("help", help=c_help.__doc__.lower())
    p.add_argument("command", nargs="?", help="command name")
    p.set_defaults(func=c_help, parser=parser, subparser=subparser)
    # info command parser
    p = subparser.add_parser("info", help=c_info.__doc__.lower())
    p.add_argument("-c", "--changelog", action="store_true",
                   help="display image changelog")
    p.add_argument("-j", "--json", action="store_true",
                   help="output is formatted in json")
    p.add_argument("-v", "--verbose", action="store_true",
                   help="verbose output")
    p.add_argument("pattern", nargs="+",
                   help="path|[repository/]image[:version]")
    p.set_defaults(func=c_info)
    # init command parser
    p = subparser.add_parser("init", help=c_init.__doc__.lower())
    p.add_argument("repository", nargs="+",
                   help="repository to initialize")
    p.set_defaults(func=c_init)
    # install command parser
    p = subparser.add_parser("install", add_help=False,
                             help=c_install.__doc__.lower())
    p.add_argument("--dry-run", action="store_true",
                   help="do not execute setup scripts")
    p.add_argument("pattern", help="path|[repository/]image[:version]")
    p.set_defaults(func=c_install, parser=parser, install_parser=p)
# list command parser
p = subparser.add_parser("list", help=c_list.__doc__.lower())
p.add_argument("-A", "--author", action="store_true",
help="display image author")
p.add_argument("-d", "--date", action="store_true",
help="display image date")
p.add_argument("-D", "--description", action="store_true",
help="display image description")
p.add_argument("-j", "--json", action="store_true",
help="output is formated in json")
p.add_argument("-l", "--long", action="store_true",
help="long display")
p.add_argument("-m", "--md5", action="store_true",
help="display image md5")
p.add_argument("-s", "--size", action="store_true",
help="display image size")
p.add_argument("-u", "--url", action="store_true",
help="display image url")
p.add_argument("pattern", nargs="*", default=[],
help="[repository/]image[:version]")
p.set_defaults(func=c_list)
# move command parser
p = subparser.add_parser("move", help=c_move.__doc__.lower())
p.add_argument("-f", "--force", action="store_true",
help="move image without confirmation")
p.add_argument("pattern", nargs="+",
help="[repository/]image[:version]")
p.add_argument("repository", help="destination repository")
p.set_defaults(func=c_move)
# new command parser
p = subparser.add_parser("new", help=c_new.__doc__.lower())
p.add_argument("-f", "--force", action="store_true",
help="overwrite existing source image")
p.add_argument("path", help="new image directory path")
p.set_defaults(func=c_new)
# payload command parser
p = subparser.add_parser("payload", help=c_payload.__doc__.lower())
p.add_argument("-j", "--json", action="store_true",
help="output is formated in json")
p.add_argument("-i", "--images", action="store_true",
help="list images using payload")
p.add_argument("payload", nargs='*', default=[u""],
help="payload md5 pattern")
p.set_defaults(func=c_payload)
# prepare_chroot command parser
p = subparser.add_parser("prepare_chroot",
help=c_prepare_chroot.__doc__.lower())
p.add_argument("-m", "--no-mount", action="store_true",
help="disable mounting of /{proc,dev,sys}")
p.add_argument("path")
p.set_defaults(func=c_prepare_chroot)
# repo command parser
p = subparser.add_parser("repo", help=c_repo.__doc__.lower())
g = p.add_mutually_exclusive_group()
p.add_argument("-j", "--json", action="store_true",
help="output is formated in json")
g.add_argument("-l", "--local", action="store_true", default=None,
help="list local repository (filter)")
g.add_argument("-r", "--remote", action="store_false", dest="local",
help="list remote repository (filter)")
g = p.add_mutually_exclusive_group()
g.add_argument("-o", "--online", action="store_true", default=None,
help="list online repository (filter)")
g.add_argument("-O", "--offline", action="store_false", dest="online",
help="list offline repository (filter)")
p.add_argument("-s", "--state", action="store_true",
help="display repository state (online/offline/local/remote)")
p.add_argument("-u", "--url", action="store_true",
help="display repository url")
p.add_argument("--purge", action="store_true",
help="remove cache databases")
p.add_argument("repository", nargs='*', default=[u"*"], help="repository pattern")
p.set_defaults(func=c_repo)
# search command parser
p = subparser.add_parser("search", help=c_search.__doc__.lower())
p.add_argument("pattern", help="pattern to search in repositories")
p.set_defaults(func=c_search)
# unprepare_chroot command parser
p = subparser.add_parser("unprepare_chroot",
help=c_unprepare_chroot.__doc__.lower())
p.add_argument("-m", "--no-umount", action="store_true",
help="disable unmounting of /{proc,dev,sys}")
p.add_argument("path")
p.set_defaults(func=c_unprepare_chroot)
# version command parser
p = subparser.add_parser("version", help=c_version.__doc__.lower())
p.set_defaults(func=c_version)
# return main parser
return parser
def main():
'''
Program main
'''
try:
# init arg parser
arg_parser = arg_parser_init()
# encode command line arguments to utf-8
try:
args = [ unicode(x, encoding=locale.getpreferredencoding()) for x in sys.argv[1:]]
except UnicodeDecodeError as e:
raise ISError("Invalid character encoding in command line")
# first partial parsing, to get early debug and config path
options = arg_parser.parse_known_args(args=args)[0]
# set early command line verbosity and color
installsystems.verbosity = options.verbosity
installsystems.printer.NOCOLOR = options.no_color
# load main config file options
config_parser = MainConfigFile(options.config, "installsystems")
options = config_parser.parse()
# second partial parsing, command line option overwrite config file
options = arg_parser.parse_known_args(args=args, namespace=options)[0]
# set verbosity and color
installsystems.verbosity = options.verbosity
installsystems.printer.NOCOLOR = options.no_color
# no warning if we are not in debug mode
if installsystems.verbosity < 2:
warnings.filterwarnings("ignore")
# nice and ionice process
if options.nice is not None or options.ionice_class is not None:
proc = psutil.Process(os.getpid())
if options.nice is not None:
try:
proc.nice = options.nice
debug("Setting nice to %d" % options.nice)
except Exception:
warn(u"Unable to nice process to %s" % options.nice)
if options.ionice_class is not None:
try:
ioclassmap = {
"none": psutil.IOPRIO_CLASS_NONE,
"rt": psutil.IOPRIO_CLASS_RT,
"be": psutil.IOPRIO_CLASS_BE,
"idle": psutil.IOPRIO_CLASS_IDLE}
proc.set_ionice(ioclassmap[options.ionice_class], options.ionice_level)
debug(u"Setting ionice to class %s, level %s" %
(options.ionice_class, options.ionice_level))
except Exception:
warn(u"Unable to ionice process to %s" % options.ionice_class)
# set timeout option
if options.timeout is not None:
socket.setdefaulttimeout(options.timeout)
debug("Global timeout setted to %ds" % options.timeout)
# parse all remaining args here, except for the install command,
# which is responsible for its own parsing
if options.func is not c_install:
options = arg_parser.parse_args(args=args, namespace=options)
# let's go
options.func(options)
exit(0)
except UnicodeDecodeError as e:
error("Unable to decode some characters. Check your locale settings.")
except KeyboardInterrupt:
warn("Keyboard Interrupted")
exit(1)
except ISError as e:
error(exception=e)
except Exception as e:
error(u"Unexpected error, please report it with debug enabled", exception=e)
# Entry point
if __name__ == '__main__':
main()
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/completion/ 0000775 0000000 0000000 00000000000 12131501173 0023503 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/completion/bash/ 0000775 0000000 0000000 00000000000 12131501173 0024420 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/completion/bash/is 0000664 0000000 0000000 00000013642 12131501173 0024764 0 ustar 00root root 0000000 0000000 # This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
# list local repositories
_local_repo() {
COMPREPLY=("${COMPREPLY[@]}" $(compgen -W "$(is --quiet --no-color --no-sync repo --local 2>/dev/null)" -- "$cur"))
}
# list all defined repositories
_repo() {
COMPREPLY=("${COMPREPLY[@]}" $(compgen -W "$(is --quiet --no-color --no-sync repo 2>/dev/null)" -- "$cur"))
}
# list all images available in any online repositories
_remote_image() {
COMPREPLY=("${COMPREPLY[@]}" $(compgen -W "$(is --quiet --no-color --no-sync list '*/*:*' 2>/dev/null)" -- "$cur"))
}
# list all local (files) images
_local_image() {
COMPREPLY=("${COMPREPLY[@]}" $(compgen -f -X '!*.isimage' -- "$cur"))
}
# list local and remote images
_image() {
_local_image
_remote_image
}
# generate completion from optional arguments
_opt() {
COMPREPLY=("${COMPREPLY[@]}" $(compgen -W "$*" -- "${cur}"))
}
_is() {
local cur prev cword
local -a cmds opts
COMPREPLY=()
COMP_WORDBREAKS="${COMP_WORDBREAKS//:}"
_get_comp_words_by_ref cur prev cword
_get_first_arg
cmds=('add' 'build' 'cat' 'changelog' 'check' 'chroot' 'clean' 'copy' 'del'
'extract' 'get' 'help' 'info' 'init' 'install' 'list' 'move' 'new' 'repo'
'search' 'version' 'diff' 'payload' 'prepare_chroot' 'unprepare_chroot')
opts=('-h' '--help'
'-V' '--version'
'-v' '--verbosity'
'-d' '--debug'
'-q' '--quiet'
'-R' '--repo-config'
'-f' '--repo-filter'
'-s' '--repo-search'
'-r' '--repo-path'
'-c' '--config'
'-C' '--cache'
'-t' '--timeout'
'--nice'
'--ionice'
'--no-cache'
'--no-color'
'--no-sync')
case "$arg" in
'')
[[ "$cur" == -* ]] && _opt "${opts[@]}" || _opt "${cmds[@]}"
;;
add)
[[ "$cur" == -* ]] && _opt "-h --help -p --preserve" && return 0
_count_args
(( args == 2 )) && _local_repo
(( args > 2 )) && _filedir '?(u)isimage'
;;
build)
[[ "$cur" == -* ]] && _opt '-h --help -f --force -p --payload -c --no-check -s --no-script -C --chdir' && return 0
_count_args
(( args >= 2 )) && _filedir -d
;;
cat)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
_count_args
(( args == 2 )) && _image
;;
changelog)
[[ "$cur" == -* ]] && _opt '-h --help -v --all-version' && return 0
_image
;;
check)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
_local_repo
;;
chroot)
[[ "$cur" == -* ]] && _opt '-h --help -m --no-mount -s --shell' && return 0
_filedir -d
;;
clean)
[[ "$cur" == -* ]] && _opt '-h --help -f --force' && return 0
_local_repo
;;
copy)
[[ "$cur" == -* ]] && _opt '-h --help -f --force' && return 0
_count_args
(( args == 2 )) && _remote_image
(( args > 2 )) && _remote_image && _local_repo
;;
del)
[[ "$cur" == -* ]] && _opt '-h --help -f --force -p --preserve' && return 0
_remote_image
;;
diff)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
_count_args
(( args < 4 )) && _image
;;
extract)
[[ "$cur" == -* ]] && _opt '-h --help -f --force -p --payload -g --gen-description' && return 0
_count_args
(( args == 2 )) && _image
(( args == 3 )) && _filedir -d
;;
get)
[[ "$cur" == -* ]] && _opt '-h --help -f --force --payload -I --no-image' && return 0
_remote_image
;;
help)
_count_args
(( args == 2 )) && _opt "${cmds[@]}"
;;
info)
[[ "$cur" == -* ]] && _opt '-v --verbose -c --changelog' && return 0
_image
;;
init)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
_local_repo
;;
install)
[[ "$cur" == -* ]] && _opt '--dry-run' && return 0
_count_args
(( args == 2 )) && _image
(( args > 2 )) && _filedir
;;
list)
[[ "$cur" == -* ]] && _opt '-h --help -l --long -j --json -m --md5 -s --size -d --date -A --author -u --url -D --description' && return 0
_remote_image
;;
move)
[[ "$cur" == -* ]] && _opt '-h --help -f --force' && return 0
_count_args
(( args == 2 )) && _remote_image
(( args > 2 )) && _remote_image && _local_repo
;;
new)
[[ "$cur" == -* ]] && _opt '-h --help -f --force' && return 0
_filedir -d
;;
payload)
[[ "$cur" == -* ]] && _opt '-h --help -j --json -i --images' && return 0
;;
prepare_chroot)
[[ "$cur" == -* ]] && _opt '-h --help -m --no-mount' && return 0
_filedir -d
;;
repo)
[[ "$cur" == -* ]] && _opt '-h --help -l --local -r --remote -o --online -O --offline -s --state --force-offline --purge -u --url -j --json' && return 0
_repo
;;
search)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
;;
unprepare_chroot)
[[ "$cur" == -* ]] && _opt '-h --help -m --no-mount' && return 0
_filedir -d
;;
version)
[[ "$cur" == -* ]] && _opt '-h --help' && return 0
;;
esac
return 0
}
complete -F _is is
# ex: ts=3 sw=3 et filetype=sh
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/completion/zsh/ 0000775 0000000 0000000 00000000000 12131501173 0024307 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/completion/zsh/_installsystems 0000664 0000000 0000000 00000034421 12131501173 0027473 0 ustar 00root root 0000000 0000000 #compdef is
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
# list local repositories
(( $+functions[_installsystems_local_repo] )) ||
_installsystems_local_repo() {
local repos expl
repos=($(_call_program options is --quiet --no-color --no-sync repo --local 2>/dev/null))
_wanted list expl 'local repo' compadd ${expl} - ${repos}
}
# list all defined repositories
(( $+functions[_installsystems_repo] )) ||
_installsystems_repo() {
local repos expl
repos=($(_call_program options is --quiet --no-color --no-sync repo 2>/dev/null))
_wanted list expl 'repo' compadd ${expl} - ${repos}
}
# list all images available in any online repositories
(( $+functions[_installsystems_remote_images] )) ||
_installsystems_remote_images() {
local images expl
images=($(_call_program options is --quiet --no-color --no-sync list "'*/*:*'" 2>/dev/null))
_wanted list expl 'remote image' compadd ${expl} - ${images}
}
# list all images available in any local repository
(( $+functions[_installsystems_local_images] )) ||
_installsystems_local_images() {
local localrepos expl images
localrepos=${(j:,:)${$(_call_program options is --quiet --no-color --no-sync repo --local 2>/dev/null)}}
images=($(_call_program options is --quiet --no-color --no-sync -s ${localrepos} -f ${localrepos} list "'*/*:*'" 2>/dev/null))
_wanted list expl 'local image' compadd ${expl} - ${images}
}
# list all local package images
(( $+functions[_installsystems_package_images] )) ||
_installsystems_package_images() {
local expl images
_wanted file expl 'package image' _files -g '*.isimage'
}
# list package and remote images
(( $+functions[_installsystems_images] )) ||
_installsystems_images() {
_installsystems_remote_images
_installsystems_package_images
}
# list installsystems commands
(( $+functions[_is_commands] )) ||
_is_commands() {
local expl
_wanted list expl 'is command' compadd ${expl} - ${_is_cmds%:*}
}
_is() {
setopt extendedglob
typeset -A opt_args
local curcontext="$curcontext" state line expl ret=1
local update_policy
zstyle -s ":completion:*:*:$service:*" cache-policy update_policy
if [[ -z "$update_policy" ]]; then
zstyle ":completion:*:*:$service:*" cache-policy _is_caching_policy
fi
local loglevel='-d --debug -q --quiet -v --verbosity'
_arguments -C \
'(- 1 *)'{-h,--help}'[show this help message and exit]' \
'(- 1 *)'{-V,--version}"[show program's version number and exit]" \
"($loglevel)"{-v+,--verbosity}'[define verbosity level]:verbosity level:((0\:quiet 1\:normal 2\:debug))' \
"($loglevel)"{-d,--debug}'[active debug mode]' \
"($loglevel)"{-q,--quiet}'[active quiet mode]' \
'(-c --config)'{-c,--config}'[config file path]:installsystems config:_files' \
'(-R --repo-config)'{-R,--repo-config}'[repository config file path]:repository config:_files' \
'(-s --repo-search)'{-s,--repo-search}'[search for images inside those repositories]:repository:->repolist' \
'(-f --repo-filter)'{-f,--repo-filter}'[filter repositories by name]:repository:->repolist' \
'(-r --repo-path)'{-r,--repo-path}'[define a temporary repository]:repository directory:_files -/' \
'(-T --repo-timeout)'{-T+,--repo-timeout}'[repository access timeout]:timeout (in second):' \
'(-C --cache)'{-C,--cache}'[path of the repository cache]:cache directory:_files -/' \
'(-t --timeout)'{-t+,--timeout}'[socket timeout]:timeout (in second):' \
'--no-cache[do not use persistent database caching]' \
"--no-sync[doesn't sync repository database cache]" \
'--no-color[do not display colored output]' \
'--nice[nice value of the process]:priority:' \
'--ionice-class[ionice class of the process (default: none)]:ionice class:(none rt be idle)' \
'--ionice-level[ionice class level of the process]:ionice level:' \
'(-): :->cmds' \
'(-)*:: :->args' && return
if [[ -n $state ]] && (( ! $+_is_cmds )); then
typeset -gHa _is_cmds
if _cache_invalid is-cmds || ! _retrieve_cache is-cmds; then
_is_cmds=(
${${${(f)${${"$(_call_program commands is --help)"#l#*positional arguments:*{*}}%%optional arguments:*}}/(#s)[[:space:]]#(#b)([-a-z_]##)[[:space:]]##([a-z]##)/$match[1]:$match[2]:l}/ */}
)
_store_cache is-cmds _is_cmds
fi
fi
case $state in
cmds)
_describe -t commands 'installsystems command' _is_cmds
;;
repolist)
local repos
repos=($(_call_program options is --quiet --no-color --no-sync repo))
_values -s , 'repository' $repos && ret=0
;;
args)
local cmd args usage idx
cmd=$words[1]
if (( $#cmd )); then
curcontext="${curcontext%:*:*}:is-${cmd}:"
args=('(- 1 *)'{-h,--help}'[show this help message and exit]')
case $cmd in
(add)
args+=(
'(-p --preserve)'{-p,--preserve}"[don't remove image after adding to database]"
'1:repository:_installsystems_local_repo'
'*:image path:_installsystems_package_images'
)
;;
(build)
args+=(
'(-c --no-check)'{-c,--no-check}'[do not check compilation before adding scripts]'
'(-C --chdir)'{-C,--chdir}'[build image inside source image directory]'
'(-f --force)'{-f,--force}'[rebuild image if already exists]'
'(-p --payload)'{-p,--payload}'[rebuild payloads if already exists]'
'(-s --no-script)'{-s,--no-script}"[doesn't execute build script]"
'*:image path:_files -/'
)
;;
(cat)
args+=(
'1:image:_installsystems_images'
'*:file (globbing allowed)'
)
;;
(changelog)
args+=(
'(-v --all-version)'{-v,--all-version}'[display changelog for all versions]'
'*:image:_installsystems_images'
)
;;
(check)
args+=(
'*:repository:_installsystems_local_repo'
)
;;
(chroot)
args+=(
'(-m --no-mount)'{-m,--no-mount}'[disable mounting of /{proc,dev,sys} inside chroot]'
'(-s --shell)'{-s,--shell}'[shell to call inside chroot]:shell'
'1:path:_files -/'
)
;;
(clean)
args+=(
'(-f --force)'{-f,--force}'[clean repository without confirmation]'
'*:repository:_installsystems_local_repo'
)
;;
(copy)
args+=(
'(-f --force)'{-f,--force}'[copy image without confirmation]'
'1:image:_installsystems_images'
'*: : _alternative "pattern:image:_installsystems_remote_images" "repo:repository:_installsystems_local_repo"'
)
;;
(del)
args+=(
'(-f --force)'{-f,--force}'[delete image without confirmation]'
'(-p --preserve)'{-p,--preserve}"[preserve payloads. doesn't remove it from repository]"
'*:image:_installsystems_local_images'
)
;;
(diff)
args+=(
'1: : _alternative "pattern:image:_installsystems_images" "repo:repository:_installsystems_repo"'
'2: : _alternative "pattern:image:_installsystems_images" "repo:repository:_installsystems_repo"'
)
;;
(extract)
args+=(
'(-f --force)'{-f,--force}'[overwrite existing destinations]'
'(-g --gen-description)'{-g,--gen-description}'[generate a description file from metadata]'
'(-p --payload)'{-p,--payload}'[extract payloads]'
'1:image: _installsystems_images'
'2:path:_files -/'
)
;;
(get)
args+=(
'(-f --force)'{-f,--force}'[overwrite existing destinations]'
'(-I --no-image)'{-I,--no-image}'[do not get image]'
'(-p --payload)'{-p,--payload}'[get payloads]'
'*:image:_installsystems_remote_images'
)
;;
(help)
args+=(
'1:command:_is_commands'
)
;;
(info)
args+=(
'(-c --changelog)'{-c,--changelog}'[display image changelog]'
'(-j --json)'{-j,--json}'[output is formatted in json]'
'(-v --verbose)'{-v,--verbose}'[verbose output]'
'*:image:_installsystems_images'
)
;;
(init)
args+=(
'*:repository:_installsystems_local_repo'
)
;;
(install)
args+=(
"--dry-run:[doesn't execute setup scripts]"
'1:image:_installsystems_images'
'2:target:_files -/'
)
;;
(list)
args+=(
'(-A --author)'{-A,--author}'[display image author]'
'(-d --date)'{-d,--date}'[display image date]'
'(-D --description)'{-D,--description}'[display image description]'
'(-j --json)'{-j,--json}'[output is formatted in json]'
'(-l --long)'{-l,--long}'[long display]'
'(-m --md5)'{-m,--md5}'[display image md5]'
'(-s --size)'{-s,--size}'[display image size]'
'(-u --url)'{-u,--url}'[display image url]'
'*:image:_installsystems_remote_images'
)
;;
(move)
args+=(
'(-f --force)'{-f,--force}'[move image without confirmation]'
'1:image:_installsystems_local_images'
'*: : _alternative "pattern:image:_installsystems_local_images" "repo:repository:_installsystems_local_repo"'
)
;;
(new)
args+=(
'(-f --force)'{-f,--force}'[overwrite existing source image]'
'1:path:_files -/'
)
;;
(payload)
args+=(
'(-j --json)'{-j,--json}'[output is formatted in json]'
'(-i --images)'{-i,--images}'[list images using payload]'
'*:payload (checksum)'
)
;;
(prepare_chroot)
args+=(
'(-m --no-mount)'{-m,--no-mount}'[disable mounting of /{proc,dev,sys}]'
'1:path:_files -/'
)
;;
(repo)
args+=(
'(-j --json)'{-j,--json}'[output is formatted in json]'
'(-l --local)'{-l,--local}'[list local repository (filter)]'
'(-r --remote)'{-r,--remote}'[list remote repository (filter)]'
'(-o --online)'{-o,--online}'[list online repository (filter)]'
'(-O --offline)'{-O,--offline}'[list offline repository (filter)]'
'(-s --state)'{-s,--state}'[display repository state (online/offline/local/remote)]'
'(-u --url)'{-u,--url}'[display repository url]'
'--purge[remove cache databases]'
'*:repo:_installsystems_repo'
)
;;
(search)
args+=(
'1:search pattern'
)
;;
(unprepare_chroot)
args+=(
'(-m --no-umount)'{-m,--no-umount}'[disable unmounting of /{proc,dev,sys}]'
'1:path:_files -/'
)
;;
esac
_arguments -s -w "$args[@]" && ret=0
else
_message "unknown is command: $words[1]"
fi
;;
esac
return ret
}
_is_caching_policy() {
[[ =$service -nt $1 ]]
}
_is "$@"
# ex: ts=3 sw=3 et filetype=zsh
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/ 0000775 0000000 0000000 00000000000 12131501173 0022554 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/changelog 0000664 0000000 0000000 00000044772 12131501173 0024444 0 ustar 00root root 0000000 0000000 installsystems (9) stable; urgency=low
Aurélien Dunand (14):
Fix search command
Fix typo in extract function
Fix unicode issue when generate description file
Add build script in image tarball
Add source image in build scripts context
Fix arrowlevel in source image build
Fix typos
Fix format issue in printer.info()
Add a function to render all templates in a directory according to an e...
Add zsh completion
Allow to build multiple images with build command
Enable extendedglob for zsh completion
Fix typo in Debian package description
Fix SyntaxError due to encoding
Nicolas Delvaux (2):
Fix a typo in 'is help', about 'is clean' ("repoistory")
Typos: "mout" was used instead of "mount" in several parts of the project.
Sébastien Luttringer (18):
description, changelog and config files must be encoded in utf-8
string interpolation must be done in unicode
command line argument are in unicode
convert printed message regarding current locale
catch unicode decoding error
fix repo search doesn't be a list
fix python2.6 doesn"t allow keywords args for str.encode
(bis) fix python2.6 doesn"t allow keywords args for str.encode
add vim modeline inside zsh comp
add -C option to build
ensure changelog data are utf8
display changelog line in one shot
tarball interface encode check encoding is in UTF8
add authors file
Add copyright and gplv2 license header
ignore test directory and TODO file
add Nicolas Delvaux to authors
add rc release instructions
-- Sébastien Luttringer Thu, 07 Jun 2012 12:01:31 +0200
installsystems (8) stable; urgency=low
Matthieu Gonnet (1):
add --force option to clean command
Sebastien Luttringer (34):
add tools used on Smartjog builder and repository
fix typo in call of istools.time_rfc2822
add json output to info command
error printing end by a dot
change help string of command
move split_image_path and split_repository_list in Repository
image selection rewrite
better check before move image
add -f to copy and better copying plan confirmation
del command ask globally before remove
merge script listing in function select_scripts
Add build scripts
add global --nice and --ionice options
add --dry-run option to install command
fix remaining use of search outside repoman
move show_images into RepositoryManager
add payload command
move smartjog tools in anothers repository
gzip file doesn't store old mtime and old filename
add json output to repo command
select_payloads take a list of pattern
select_images take a list of pattern
fix extract command traceback
get_images must return repository as 2nd value
add dependency tracking file
Use external compressor and archiver
add python as debian build deps
remove tar and gzip dependencies
fix accent on my first name
check download result before compressor/archiver
fix build command always rebuild payloads files
allow ionice level to be set
no_cache is not optional
rewrite timeout code
-- Sebastien Luttringer Fri, 16 Mar 2012 15:58:32 +0100
installsystems (7) unstable; urgency=low
Sebastien Luttringer (1):
remove forgotten print
-- Sebastien Luttringer Fri, 23 Dec 2011 14:05:33 +0100
installsystems (6) unstable; urgency=low
Sebastien Luttringer (65):
return code is != 0 on error or keyboard interrupt
check command fail if cwd is not the repository path
replace double by simple quote in bash_completion
also mount /dev/shm inside chroot
chroot preparation doesn't mount on a mountpoint
error already exit with an error code
We don't need to ship argparse
Fix gzip payload failure on python 2.6
check repository name validity
add command repo
clean command options
new is list command
introduce list --no-sync option
add new command prepare_chroot and unprepare_chroot
prepare and unprepare chroot create resolv.conf only if /etc exists
prepare and unprepare enhancement
introduce ~ and + in installsystems versions
list command display md5 in line
add version field to repository
build now create payload without version
reorder options alphabetically
add option -p to build
better error message in build command
add a local state to repositories
new display and option for command repo
create a function to list all images in repoman
characters with [~-+] in version can be more than ascii
fix bug of color substitution
list only last version of one image by repo by default
install use a subparser for install scripts
Images are not searched in search path
add maintainers file
add -S, --search option to list command
add sphinx documentation
move display functions in binary
We need a recent version of python-argparse
fix missing parameter in arrow function
add --purge to command repo
add istrick file in {,un}prepare_chroot functions
reorder options
add is_version in tools
remove one check of image and payloads when adding to repo
fix repoman get return repo name on not repo object
fix displaying color with --no-color
fix typos
reindent bash completion with spaces
cat command display a warning if no file is selected by pattern
doesn't display directory with cat command
only use functions inside IS binary
add completion for {,un}prepare_chroot and -s
introduce repo and database strict versionning
add,check,clean,copy,init,move completion limit to local repositories
build command correctly handles ctrl+c and kill -15
list search on name if no / is present in pattern
repository name checking moved inside repository class
rename split_repositories into split_repository_list
list filter by search path if set
move --no-sync as program option
cachify doesn't create empty file when failing to get database
remove --force-offline repo option
rewrite argument and config options loading
let argparse display version message
replace quiet and debug by verbosity
move python-progressbar from installsystems
image and database format use now x.y format
-- Sebastien Luttringer Fri, 23 Dec 2011 11:36:29 +0100
installsystems (5) unstable; urgency=low
Matthieu Gonnet (2):
is extract -D generate a description file from the description.json file
Fix typo
Sebastien Luttringer (61):
Introduce PipeFile in place of uopen
fix database init
Implement ssh transport
Repository add now user PipeFile
Add isversion inside images
Display progress bar during image creation
use PipeFile.consume() in place of shutil.copyfileobj
Add changelog to images
Introduce changelog
Fix invalid payload naming during download
Offline state is now settable in repository.conf
Paramiko is not a hard dependency.
Rollback to python 2.6
Doesn't display warning when we are not in debug mode
fix missing progressbar package inside setup.py
Fix issue with utf8 encoding in author
fix extracting payload with utf8 filenames
fix is new traceback
allow format,description.json,changelog to be writable by user
is list display changelog only with -c or -v option
is new doesn't overwrite by default
Add command changelog
fix repo.show doesn't have changelog parameter
is list can display repository content with full path
add --no-color option to is
fix loading of parameters from config files
Reintroduce commit 184b64a23084a75513d48d336470f8048dd68b60
repository check now display corrupted files
add a minimal installsystems version option
rename isversion into is_build_version
image show display is_min_version in verbose mode
doesn't display changelog in list if no -c args in verbose mode
fix bad is_min_version syntax
fix changelog display which leads to a traceback
fix temporary caching of command line repositories
fix typo in compare_version
add chroot command
add command info
fix typo
remove double definition of last in repository
fix presence of is_min_version during read_metadata
Add bash-completion
add long --all-version to -v in changelog
no progressbar in quiet mode
fix regex and is call in bash-completion
build command check and add script in alpha order
is get can now download only payloads
script without execution bit are not added in image
fix bash completion with positional argument and options
Allow del command to keep payload inside repository
use filepath and not filename in error in check_script
install bash comp list file after image
split tools' chroot in 3 functions
debian chroot helper policy-rc.d needs to be executable
add resolv.conf copying in prepare chroot tricks
policy-rc.d must exit 101 in chroot
Add a download size information on download progressbar
fix error when chrooting inside a root without /etc
improve human_size computation
display time in rfc2822 format
remove image.id and use image.filename instead
-- Sebastien Luttringer Thu, 17 Nov 2011 12:56:57 +0100
installsystems (4) unstable; urgency=low
Aurélien Dunand (13):
Fix typos and docstring bad format
Fix missing parentheses
add command takes several images files in one time
Remove unused import
help command can now display subparsers' help
Reorder subparsers as the same order of their function
Add Unix globbing capability for repo_filter
Add search command
Cosmetics changes for list command
Add image retrieval from ftp
Copy file from http without reading it all at once
Add copy command
Refactoring code for select one repository
Matthieu Gonnet (2):
Add clean command
Add check command
Seblu (66):
bump version
update release file
fix copy implementation.
fix use of non existant parameter keep
Add a move command
fix displaying of unknown command by help command
Remove update command
searching is now handled in repomanager
replace pathtype == file by isfile
remove unneeded affectation
command add now take destination repository as parameter
fix typo in new command help
repository can be unavailable
Introducing a new way of selecting image
reorder alphabetically command name in is
allow version to be prefixed by v
Rewrite command cat
Command move use new style
better del asking implementation
Command copy use new image syntax
fix typo in tarball.py
command install use new image syntax
Remove select_one_repository. No more needed.
stricter check during image selection
list now use new syntax spec
reorder copy and move argument
temporary repo now have a unique identifier
repositories on command line are added to the list of repositories in config file, not superseded
Add extract command
Add command get
Smarter display during image loading
fix aurelien misunderstood epoch time
fix bug of data added in description.json
Fix unable to extract file from a remote directory
Fix extraction of tarball with non-existent uid/gid
fix one lintian error from buildd message
fix aurelien misunderstood epoch time in repository
use istools.isfile instead of istools.pathtype == "file"
move repo filter inside repomanager
more precise message when image is not found
introduce offline repository mode
command list show offline status
reorder RepoManager and Repoconfig classes
fix again adunand gmttime issue
offline repository is no more used to get/search an image
Image download from repository is now handled by repository class
improve is help messages and options
fix missing gzip module in image.py
add a best mode to commands using image selection syntax
implement globbing on repository name in list
introduce color shortcut
introduce ask and confirm function
better error message when image file is invalid
improve tarball loading and check md5 from repository
list repo matching only on online repositories
set offline to false after creating a repository
add timeout to uopen
image name can use - and _
enforce image name and version checking
fix missing import sys in tarball
better display during repo cleaning
add a comment about python upstream bug
repo cleaning warn and not raise an error if it cannot delete a file
Print color only on a tty
Change clean command message
Add diff command
-- Sebastien Luttringer Thu, 01 Sep 2011 14:06:21 +0200
installsystems (3) unstable; urgency=low
Aurélien Dunand (7):
Add function to display packaged image content
Add function to display repository content
Add list command
Add cat command
Fix missing colon
cat command can display several files
Add update command
Seblu (17):
update release method
version 3~dev0
clean tar remove all tarball, not only current version
add version command
add help command
fix missing config files in debian package
fix typo
fix some python header
fix bad detection of destination file type during extraction
add emacs python header
change display of cat command
fix arrow spacing
change list behaviour
more spacing in list detail mode
check args boundary on command list
fix bad cache loading
more info before displaying content
-- Sebastien Luttringer Tue, 26 Jul 2011 12:28:49 +0200
installsystems (2) unstable; urgency=low
Aurélien Dunand (3):
Remove unused import
Fix typo in var name
Add human_size function
Seblu (24):
bump v2 devel
make clean remove tarball
installsystems package depends on same version of python-installsystems
Fix docstring bad format
Remove original file after adding to db
repoman is always used to handle repositories from is
Update argparse to python 2.7.2
add gzip to installsystems because python 2.6 doesn't have mtime option
add isfile method
local repository are not cached by repomanager
init image and init repo split into new and init commands
remove exception in subcommand
init repository must not use load_repositories
Revert "Update argparse to python 2.7.2"
Python 2.7 is a minimum requirement
setup install config files
make tar always recreate tarball
make deb doesn't sign package
fix debian dependency on python2.7
Asking help with install display help including options from image
Fix extraction of broken tarball
Payloads are now created like --numeric-owner with tar
debian control needs an explicit python version
let make buildd distro be choose when calling make
-- Sebastien Luttringer Mon, 18 Jul 2011 13:01:31 +0200
installsystems (1) unstable; urgency=low
Aurélien Dunand (3):
Fix typos
Config filename must be a file
Missing parenthesises
Seblu (66):
Initial Commit
Add isimage binary and its module classes
Add isrepo and update common classes
Add isinstall and update common classes
isinstall update local cache only with -u
installsystems version is option -V not -v
isinstall can install local package
Introduce new module tools
Better detection of image name type
move md5sum into tools module
Fix bad dereferencing in directory in /data of image
Generic improvement. Too many things to tell.
Fix stupid errors in tools module
Add tarfile from python 2.7
Fix repository deletion
Add config file
Add image data when running scripts
Remove pyxdg dependency
Allow chmod, chown and chgroup in repository config
Adding md5 of script tarball in repository
Improve remote cache. Http transport enabled
Add python 2.7 argparse in installsystems
Add timeout parameter to isinstall
Add configuration samples
isimage doesn't fail if image directory partially exists
Add required in default parser template
selecting of repo in isrepo is smarter
try..catch in run scripts
Add extractdata method, which is a helper to extract content of tarball data
Compression is now gzip
script in tarball have now rights 755
Introduce RepositoryManager and extractdata
Downloading image from repository verifies md5
Add debian packaging
Change db format to sqlite3
Extract data compute name correctly
cleaning import
Introduce Payload
Keep global dict between parser and setup
arrow can be called without level and verbose
fix typo
script name must match the following regex \d+-.*\.py
Improve script execution
Improve arrow and printer
Fix downloading of all payloads to compute md5 when it's not needed
Adding script is now done one by one and not recursively
Check scripts syntax (by compiling) before adding
fix bad image file building
function name typo
New configuration style
Delete tarfile. We now don't use feature of last version
Add/Delete package in repository
handle db add in repo add
add message to explain which version is installed
Update makefile to easy publish
fix config file loading
Fix debian packaging
No more custom action to set debug and quiet mode
Merge command line tools into one
add a symlink to is named installsystems
change installsystems default config file
Fix traceback cause to a debug message
fix config loading issue caused by argparse append
install command now use common loading repositories
Fix bad caching selection and disabling
Remove usage on error
-- Sebastien Luttringer Tue, 05 Jul 2011 18:04:07 +0200
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/compat 0000664 0000000 0000000 00000000002 12131501173 0023752 0 ustar 00root root 0000000 0000000 7
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/control 0000664 0000000 0000000 00000001626 12131501173 0024164 0 ustar 00root root 0000000 0000000 Source: installsystems
Section: python
Priority: optional
Maintainer: Sébastien Luttringer
Build-Depends: python (>= 2.6), debhelper (>= 7), python-central (>= 0.6), cdbs (>= 0.4.50), python-setuptools, python-docutils
XS-Python-Version: >= 2.6
Standards-Version: 3.9.1
Package: installsystems
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-installsystems (>= ${source:Version}), python-psutil (>= 0.2.1)
XB-Python-Version: ${python:Versions}
Description: Python2 Installation framework
InstallSystems command line tool
Package: python-installsystems
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-paramiko, python-argparse (>= 1.2.1), python-progressbar (>= 2.3), python-jinja2
XB-Python-Version: ${python:Versions}
Description: Python2 Installation framework - Python2 modules
This package provides InstallSystems Python modules
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/copyright 0000664 0000000 0000000 00000000224 12131501173 0024505 0 ustar 00root root 0000000 0000000 Files: *
Copyright: © 2011 Smartjog - Sébastien Luttringer
License: LGPL-3
See /usr/share/common-licenses/LGPL-3 for a full copy of the license.
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/installsystems.install 0000664 0000000 0000000 00000000017 12131501173 0027240 0 ustar 00root root 0000000 0000000 etc
usr/bin/is
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/installsystems.links 0000664 0000000 0000000 00000000041 12131501173 0026707 0 ustar 00root root 0000000 0000000 usr/bin/is usr/bin/installsystems installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/installsystems.manpages 0000664 0000000 0000000 00000000011 12131501173 0027357 0 ustar 00root root 0000000 0000000 doc/is.1
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/python-installsystems.install 0000664 0000000 0000000 00000000010 12131501173 0030550 0 ustar 00root root 0000000 0000000 usr/lib
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/rules 0000775 0000000 0000000 00000000616 12131501173 0023637 0 ustar 00root root 0000000 0000000 #!/usr/bin/make -f
# -*- makefile -*-
DEB_PYTHON_SYSTEM=pycentral
# Debhelper must be included before python-distutils to use
# dh_python / dh_pycentral / dh_pysupport
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
PYTHON_PACKAGES := is python-installsystems
$(patsubst %,binary-install/%,$(PYTHON_PACKAGES)) ::
dh_pycentral -p$(cdbs_curpkg)
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/source/ 0000775 0000000 0000000 00000000000 12131501173 0024054 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/debian/source/format 0000664 0000000 0000000 00000000004 12131501173 0025261 0 ustar 00root root 0000000 0000000 1.0
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/ 0000775 0000000 0000000 00000000000 12131501173 0022077 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/Makefile 0000664 0000000 0000000 00000011021 12131501173 0023532 0 ustar 00root root 0000000 0000000 # Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/InstallSystems.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/InstallSystems.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/InstallSystems"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/InstallSystems"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/ 0000775 0000000 0000000 00000000000 12131501173 0022650 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/config.rst 0000664 0000000 0000000 00000000141 12131501173 0024643 0 ustar 00root root 0000000 0000000 installsystems.config
=====================
.. automodule:: installsystems.config
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/database.rst 0000664 0000000 0000000 00000000147 12131501173 0025150 0 ustar 00root root 0000000 0000000 installsystems.database
=======================
.. automodule:: installsystems.database
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/image.rst 0000664 0000000 0000000 00000000136 12131501173 0024464 0 ustar 00root root 0000000 0000000 installsystems.image
====================
.. automodule:: installsystems.image
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/index.rst 0000664 0000000 0000000 00000000224 12131501173 0024507 0 ustar 00root root 0000000 0000000 InstallSystems API
==================
.. toctree::
:maxdepth: 2
config
database
image
printer
repository
tarball
tools
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/printer.rst 0000664 0000000 0000000 00000000144 12131501173 0025064 0 ustar 00root root 0000000 0000000 installsystems.printer
======================
.. automodule:: installsystems.printer
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/repository.rst 0000664 0000000 0000000 00000000151 12131501173 0025616 0 ustar 00root root 0000000 0000000 installsystems.repository
=========================
.. automodule:: installsystems.repository
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/tarball.rst 0000664 0000000 0000000 00000000144 12131501173 0025022 0 ustar 00root root 0000000 0000000 installsystems.tarball
======================
.. automodule:: installsystems.tarball
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/api/tools.rst 0000664 0000000 0000000 00000000136 12131501173 0024542 0 ustar 00root root 0000000 0000000 installsystems.tools
====================
.. automodule:: installsystems.tools
:members:
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/conf.py 0000664 0000000 0000000 00000015703 12131501173 0023404 0 ustar 00root root 0000000 0000000 # -*- coding: utf-8 -*-
#
# InstallSystems documentation build configuration file, created by
# sphinx-quickstart on Fri Dec 2 12:10:48 2011.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'InstallSystems'
copyright = u'2011, Sébastien Luttringer'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '5'
# The full version, including alpha/beta/rc tags.
release = '5'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'InstallSystemsdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'InstallSystems.tex', u'InstallSystems Documentation',
u'Sébastien Luttringer', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'installsystems', u'InstallSystems Documentation',
[u'Sébastien Luttringer'], 1)
]
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/index.rst 0000664 0000000 0000000 00000000713 12131501173 0023741 0 ustar 00root root 0000000 0000000 .. InstallSystems documentation master file, created by
sphinx-quickstart on Fri Dec 2 12:10:48 2011.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to InstallSystems's documentation!
==========================================
Contents:
.. toctree::
:maxdepth: 2
api/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/doc/is.1.rst 0000664 0000000 0000000 00000022573 12131501173 0023414 0 ustar 00root root 0000000 0000000 ==
is
==
--------------
InstallSystems
--------------
:Author: Sébastien Luttringer
:Manual section: 1
SYNOPSIS
========
is [options...] command [args...]
SUMMARY
=======
InstallSystems is a system deployment tool. It can easily set up hosts or VMs. It gives access to versioned system images and configuration scripts, providing a complete solution for mass deployment. It groups tools for image building, repository management and easy installation.
You can use InstallSystems with existing repositories/images or make your own with the appropriate is command.
OPTIONS
=======
-h, --help
show a help message and exit
-V, --version
show installsystems version
-v {0,1,2}, --verbosity {0,1,2}
set the verbosity level
-d, --debug
enable debug mode
-q, --quiet
enable quiet mode
-c *CONFIG*, --config *CONFIG*
define path to configuration file
-R *REPO_CONFIG*, --repo-config *REPO_CONFIG*
define path to repositories configuration file
-s *REPO*, --repo-search *REPO*
search for images inside those repositories
-f *REPO_FILTER*, --repo-filter *REPO_FILTER*
filter repositories by name
-r *REPO_PATH*, --repo-path *REPO_PATH*
define *REPO_PATH* as a temporary repository
-T *SECONDS*, --repo-timeout *SECONDS*
set repositories access timeout to *SECONDS*
--no-cache
do not use persistent database caching
--no-sync
do not sync repository database cache
--no-color
do not display color output
--nice NICE
set the *NICE* value for the process
--ionice-class {none,rt,be,idle}
ionice class of the process (default: none)
--ionice-level IONICE_LEVEL
set the *IONICE_LEVEL* for the process
COMMANDS
========
Please note that you can display specific help messages for all of
these commands by using the --help argument after the command name.
add [-h] [-p] *repository* *image*...
Add a local *image* to a local *repository*.
-p, --preserve
do not remove *image* after adding it to the *repository*
build [-h] [-c] [-C] [-f] [-p] [-s] [*path*]...
Check and build the InstallSystems source image in *path* (by default, in the current directory).
-c, --no-check
do not check scripts compilation
-C, --chdir
build image inside source image directory, not in the current one
-f, --force
overwrite existing images
-p, --payload
overwrite existing payloads
-s, --no-script
do not run build scripts
cat [-h] path|[repository/]\ *image*\ [:version] *file*...
Display one *file* (or more) from *image*. Globbing is allowed for file matching.
changelog [-h] [-v] path|[repository/]\ *image*\ [:version]...
Display the last changelog entry for one *image* (or more).
-v, --all-version
display the whole changelog
check [-h] *repository*
Check a local *repository* for missing, unreferenced and corrupted files.
chroot [-h] [-m] [-s *SHELL*\ ] *path*
Chroot inside *path*. This is especially useful to update system images. It mounts filesystems (/proc, /sys, /dev, /dev/pts, /dev/shm), modifies a few config files (resolv.conf, mtab) and finally executes a shell in your chroot (default: /bin/bash)
-m, --no-mount
disable mounting of /{proc,dev,sys}
-s *SHELL*\ , --shell *SHELL*
shell to call inside the chroot
clean [-h] [-f] *repository*...
Clean up one local *repository* (or more). This removes files that are no longer referenced in the repository database.
-f, --force
do not prompt before cleaning
copy [-h] [-f] *image*... *repository*
Copy one *image* (or more) to another local *repository*.
-f, --force
overwrite existing images without prompting
del [-h] [-f] [-p] *image*...
Delete one *image* (or more) from its repository.
-f, --force
delete images without prompting
-p, --preserve
do not remove payloads from the repository
diff [-h] *object* *object*
Show diff between two repositories or images.
extract [-h] [-f] [-g] [-p] *image* *path*
Extract an InstallSystems *image* into *path*.
-f, --force
overwrite existing destination
-g, --gen-description
generate a description file from metadata
-p, --payload
extract payloads
get [-h] [-f] [-I] [-p] *image*...
Download a remote InstallSystems *image* into the current directory.
-f, --force
overwrite existing destination
-I, --no-image
do not get the image (should be combined with -p)
-p, --payload
also get payloads
help [-h]
Show help.
info [-h] [-c] [-j] [-v] *image*...
Display info about one *image* (or more).
-c, --changelog
display *image* changelog
-j, --json
output is formatted in JSON
-v, --verbose
verbose output
init [-h] *repository*...
Create one empty *repository* (or more).
install [--dry-run] *image*
Install *image*. Each *image* may have specific options. Typically, each one will display a list of available options when using the **--help** argument. In case of trouble during the install you should contact the author of the image. You can find this info in its description file.
--dry-run
do not execute setup scripts
list [-h] [-A] [-d] [-D] [-j] [-l] [-m] [-s] [-u] [image...]
List available images. By default, it displays the image name and its repository, ordered by repositories/images/version.
-A, --author
display image author
-d, --date
display image date
-D, --description
display image description
-j, --json
output is formatted in JSON
-l, --long
long display
-m, --md5
display image md5
-s, --size
display image size
-u, --url
display image url
move [-h] [-f] *image*... *repository*
Move one *image* (or more) to another *repository*.
-f, --force
move *image* without confirmation
new [-h] [-f] *path*
Create a new source image in *path*. It creates the base directories (parser, setup, payload) and a description template. Moreover, this command creates sample files for setup, parser and changelog. It also sets executable rights on scripts.
-f, --force
overwrite existing source image
payload [-h] [-j] [-i] [md5_pattern]...
List available payloads matching *md5_pattern* (Default: match everything)
-j, --json
output is formatted in JSON
-i, --images
list images using payload
prepare_chroot [-h] [-m] *path*
Prepare to chroot in *path*.
-m, --no-mount
disable mounting of /{proc,dev,sys}
repo [-h] [-j] [-l|-r] [-o|-O] [-s] [-u] [--purge] [repository]...
List available repositories. By default, only names are displayed.
-j, --json
output is formatted in JSON
-l, --local
list local repositories (filter)
-r, --remote
list remote repositories (filter)
-o, --online
list online repositories (filter)
-O, --offline
list offline repositories (filter)
-s, --state
display repository state (online/offline/local/remote)
-u, --url
display repository url
--purge
remove cache databases
search [-h] *pattern*
Search *pattern* in repositories.
unprepare_chroot [-h] [-m] *path*
Remove preparation of a chroot in *path*.
-m, --no-umount
disable unmouting of /{proc,dev,sys}
version [-h]
Print InstallSystems version.
EXAMPLES
========
Set up a real host and then reboot it.
is install debian-smartjog -n bobby.seblu.net --disks /dev/sda --reboot
Create a new image named foobar.
is new foobar
Build the cdn-fw image.
is build ./images/cdn-fw
IMAGES
======
InstallSystems uses two kinds of images:
**source image**
Each image available in repositories has to be built; before building, it is called a source image. A source image contains five directories and two files, and makes a distinction between scripts and payloads.
build/
Scripts to customize the build process for the image.
parser/
Scripts adding specific options for the image are in this directory.
setup/
The scripts implementing the logical steps of the install are in this directory.
lib/
Python modules which are embedded in the image.
payload/
This directory embeds one or more payloads (typically rootfs) for the image.
description
It defines the author, the date and the version of the image.
changelog
The changelog file lists modifications of the image.
**packaged image**
Built images are called packaged images. They are versioned, gzipped and ready to deploy. Like source images, packaged images still distinguish between scripts and payloads, but they no longer distinguish between build, parser and setup scripts. In fact you will have at least two tarballs:
image_name.isimage
This tarball contains build/, parser/, setup/, description and changelog.
image_name.isdata
This tarball contains one payload from payload/.
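The .isimage split can be illustrated with the standard tarfile module. The member names below mirror what the script tarball contains; the miniature in-memory archive is only a stand-in for a real packaged image, whose payload tarballs ship separately as .isdata files.

```python
import io
import tarfile

# build a miniature .isimage-like tarball in memory (empty members,
# names mirror the script tarball contents described above)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for member in ("description.json", "changelog", "format",
                   "build/01-build.py", "parser/01-parser.py",
                   "setup/01-setup.py"):
        info = tarfile.TarInfo(member)
        info.size = 0
        tar.addfile(info, io.BytesIO(b""))

# a packaged image is read back the same way
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()
print(names)
```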
REPOSITORIES
============
InstallSystems manages images with repositories.
An InstallSystems repository uses a SQLite3 database (db), a last file (timestamp of the last db modification) and the MD5s of its images. Repositories are reachable over HTTP(S), FTP and SSH, which allows you to easily access images.
Also, please note that you can only modify local repositories.
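Reading such a repository database can be sketched with the standard sqlite3 module. The misc table and its version key match the query used by the Database class below; the reduced schema created here is only a stand-in for the real one, which is built from istemplate.createdb.

```python
import sqlite3

# reduced stand-in for a repository db (real schema: istemplate.createdb)
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE misc (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO misc VALUES ('version', '1.0')")

# this mirrors how Database.__init__ reads the format version
row = conn.execute("SELECT value FROM misc WHERE key = 'version'").fetchone()
version = float(row[0]) if row is not None else 1.0
print(version)  # databases with version >= 2.0 are rejected as invalid
```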
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/__init__.py
# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
InstallSystems module
'''
canonical_name="installsystems"
version = "9"
verbosity = 1 # 0: quiet, 1: normal, 2: debug
__all__ = []
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/config.py
# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
InstallSystems Configuration files class
'''
import codecs
import os
import sys
from argparse import Namespace
from ConfigParser import RawConfigParser
from installsystems.exception import *
from installsystems.printer import *
from installsystems.repository import RepositoryConfig
class ConfigFile(object):
'''
Configuration File base class
'''
def __init__(self, filename):
'''
filename can be full path to config file or a name in config directory
'''
#try to get filename in default config dir
if os.path.isfile(filename):
self.path = os.path.abspath(filename)
else:
self.path = self._config_path(filename)
self.reload()
def reload(self):
'''
Reload configuration from file
'''
raise NotImplementedError
def _config_path(self, name):
'''
Return path of the best config file
'''
for cf in [ os.path.join(os.path.expanduser(u"~/.config/installsystems/%s.conf" % name)),
u"/etc/installsystems/%s.conf" % name ]:
if (os.path.isfile(cf) and os.access(cf, os.R_OK)):
return cf
return None
class MainConfigFile(ConfigFile):
'''
Program configuration file
'''
valid_options = {
"verbosity": [0,1,2],
"no_cache": bool,
"no_color": bool,
"timeout": int,
"cache": str,
"repo_search": str,
"repo_filter": str,
"repo_config": str,
"repo_timeout": int,
"nice": int,
"ionice_class": ["none", "rt", "be", "idle"],
"ionice_level": int
}
def __init__(self, filename, prefix=os.path.basename(sys.argv[0])):
self.prefix = prefix
ConfigFile.__init__(self, filename)
def reload(self):
'''
Load/Reload config file
'''
self._config = {}
# loading default options
self._config["cache"] = self.cache
# loading config file if exists
if self.path is None:
debug("No main config file to load")
return
debug(u"Loading main config file: %s" % self.path)
try:
cp = RawConfigParser()
cp.read(self.path)
# main configuration
if cp.has_section(self.prefix):
self._config.update(cp.items(self.prefix))
except Exception as e:
raise ISError(u"Unable to load main config file %s" % self.path, e)
def parse(self, namespace=None):
'''
Parse current loaded option within a namespace
'''
if namespace is None:
namespace = Namespace()
for option, value in self._config.items():
# check option is valid
if option not in self.valid_options.keys():
warn(u"Invalid option %s in %s, skipped" % (option, self.path))
continue
# option names are expected to be strings
if not isinstance(option, basestring):
raise TypeError(u"Invalid config parser option %s type" % option)
# smartly cast option's value
if self.valid_options[option] is bool:
value = value.strip().lower() not in ("false", "no", "0", "")
# in case of valid option is a list, we take the type of the first
# argument of the list to convert value into it
# as a consequence, all element of a list must be of the same type!
# empty list are forbidden !
elif isinstance(self.valid_options[option], list):
ctype = type(self.valid_options[option][0])
try:
value = ctype(value)
except ValueError:
warn("Invalid option %s type (must be %s), skipped" %
(option, ctype))
continue
if value not in self.valid_options[option]:
warn("Invalid value %s in option %s (must be in %s), skipped" %
(value, option, self.valid_options[option]))
continue
else:
try:
value = self.valid_options[option](value)
except ValueError:
warn("Invalid option %s type (must be %s), skipped" %
(option, self.valid_options[option]))
continue
setattr(namespace, option, value)
return namespace
def _cache_paths(self):
'''
List all candidates to cache directories. Alive or not
'''
dirs = [os.path.expanduser("~/.cache"), "/var/tmp", "/tmp"]
# we have an additional directory if we are root
if os.getuid() == 0:
dirs.insert(0, "/var/cache")
return map(lambda x: os.path.join(x, self.prefix), dirs)
def _cache_path(self):
'''
Return path of the best cache directory
'''
# find a good directory
for di in self._cache_paths():
if (os.path.exists(di)
and os.path.isdir(di)
and os.access(di, os.R_OK|os.W_OK|os.X_OK)):
return di
return None
@property
def cache(self):
'''
Find a cache directory
'''
if "cache" in self._config:
return self._config["cache"]
if self._cache_path() is None:
for di in self._cache_paths():
try:
os.mkdir(di)
break
except Exception as e:
debug(u"Unable to create %s: %s" % (di, e))
return self._cache_path()
class RepoConfigFile(ConfigFile):
'''
Repository Configuration class
'''
def reload(self):
'''
Load/Reload config file
'''
# setting default config
self._config = {}
self._repos = []
# if no file nothing to load
if self.path is None:
return
# loading config file if exists
debug(u"Loading repository config file: %s" % self.path)
try:
cp = RawConfigParser()
cp.readfp(codecs.open(self.path, "r", "utf8"))
# each section is a repository
for rep in cp.sections():
# check if its a repo section
if "path" not in cp.options(rep):
continue
# get all options in repo
self._repos.append(RepositoryConfig(rep, **dict(cp.items(rep))))
except Exception as e:
raise ISError(u"Unable to load repository file %s" % self.path, e)
@property
def repos(self):
'''
Get a list of available repositories
'''
# deep copy
return list(self._repos)
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/database.py
# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
Database stuff
'''
import os
import sqlite3
import installsystems.tools as istools
import installsystems.template as istemplate
from installsystems.tarball import Tarball
from installsystems.exception import *
from installsystems.printer import *
class Database(object):
'''
Abstract repo database stuff
It needs to be local because sqlite3 needs to open a file
'''
@classmethod
def create(cls, path):
arrow("Creating repository database")
# check locality
if not istools.isfile(path):
raise ISError("Database creation must be local")
path = os.path.abspath(path)
if os.path.exists(path):
raise ISError("Database already exists. Remove it first")
try:
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(istemplate.createdb)
conn.commit()
conn.close()
except Exception as e:
raise ISError(u"Create database failed", e)
return cls(path)
def __init__(self, path):
# check locality
if not istools.isfile(path):
raise ISError("Database must be local")
self.path = os.path.abspath(path)
if not os.path.exists(self.path):
raise ISError("Database does not exist")
self.conn = sqlite3.connect(self.path, isolation_level=None)
self.conn.execute("PRAGMA foreign_keys = ON")
# get database version
try:
r = self.ask("SELECT value FROM misc WHERE key = 'version'").fetchone()
if r is None:
raise TypeError()
self.version = float(r[0])
except:
self.version = 1.0
# we only support database v1
if self.version >= 2.0:
debug(u"Invalid database format: %s" % self.version)
raise ISError("Invalid database format")
# we make a query to be sure format is valid
try:
self.ask("SELECT * FROM image")
except:
debug(u"Invalid database format: %s" % self.version)
raise ISError("Invalid database format")
def begin(self):
'''
Start a db transaction
'''
self.conn.execute("BEGIN TRANSACTION")
def commit(self):
'''
Commit current db transaction
'''
self.conn.execute("COMMIT TRANSACTION")
def ask(self, sql, args=()):
'''
Ask question to db
'''
return self.conn.execute(sql, args)
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/exception.py
# -*- python -*-
# -*- coding: utf-8 -*-
# Installsystems - Python installation framework
# Copyright © 2011-2012 Smartjog S.A
# Copyright © 2011-2012 Sébastien Luttringer
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
'''
InstallSystems Exceptions
'''
import traceback
import sys
class ISException(Exception):
'''
Base exception class
'''
def __init__(self, message="", exception=None):
self.message = unicode(message)
self.exception = None if exception is None else sys.exc_info()
def __str__(self):
'''
Return a description of exception
'''
if self.exception is not None:
return u"%s: %s" % (self.message, self.exception[1])
else:
return self.message
def print_sub_tb(self, fd=sys.stderr):
'''
Print stored exception traceback and exception message
'''
# no exception, do nothing
if self.exception is None:
return
# print traceback and exception separately to avoid recursive print of
# "Traceback (most recent call last)" from traceback.print_exception
traceback.print_tb(self.exception[2], file=fd)
fd.write("".join(traceback.format_exception_only(self.exception[0], self.exception[1])))
# recursively call traceback print on ISException error
if isinstance(self.exception[1], ISException):
self.exception[1].print_sub_tb()
def print_tb(self, fd=sys.stderr):
'''
Print traceback from embedded exception or current one
'''
from installsystems.printer import out
# coloring
out("#l##B#", fd=fd, endl="")
traceback.print_exc(file=fd)
self.print_sub_tb(fd)
# reset color
out("#R#", fd=fd, endl="")
class ISError(ISException):
'''
Installsystems error; this exception will stop execution
'''
class ISWarning(ISException):
'''
Installsystems warning; this exception does not stop execution
'''
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/image.py
# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
Image stuff
'''
import codecs
import ConfigParser
import cStringIO
import difflib
import json
import locale
import math
import os
import re
import shutil
import stat
import subprocess
import sys
import tarfile
import time
import installsystems
import installsystems.template as istemplate
import installsystems.tools as istools
from installsystems.exception import *
from installsystems.printer import *
from installsystems.tools import PipeFile
from installsystems.tarball import Tarball
class Image(object):
'''
Abstract class of images
'''
# format should be a float X.Y but for compatibility reasons it's a string
# before version 6, it's a strict string comparison
format = "1"
extension = ".isimage"
@staticmethod
def check_image_name(buf):
'''
Check if @buf is a valid image name
'''
if re.match("^[-_\w]+$", buf) is None:
raise ISError(u"Invalid image name %s" % buf)
@staticmethod
def check_image_version(buf):
'''
Check if @buf is a valid image version
'''
if re.match("^\d+$", buf) is None:
raise ISError(u"Invalid image version %s" % buf)
@staticmethod
def compare_versions(v1, v2):
'''
For backward compatibility, the Image class offers a method to compare
image versions, but the code now lives in tools
'''
return istools.compare_versions(v1, v2)
def _load_modules(self, lib_list, get_str):
'''
Load python modules embedded in the image
Return a dict of {module_name: module object}
'''
if not lib_list:
return {}
arrow(u"Load libs")
old_level = arrowlevel(1)
gl = {}
# order matter!
lib_list.sort()
for filename in lib_list:
arrow(os.path.basename(filename))
name = os.path.basename(filename).split('-', 1)[1][:-3]
if name in gl:
error('Module %s already loaded' % name)
# extract source code
try:
code = get_str(filename)
except Exception as e:
raise ISError(u"Extracting lib %s fail: %s" %
(filename, e))
gl[name] = istools.string2module(name, code, filename)
# avoid ImportError when exec 'import name'
sys.modules[name] = gl[name]
arrowlevel(level=old_level)
return gl
class SourceImage(Image):
'''
Image source manipulation class
'''
@classmethod
def create(cls, path, force=False):
'''
Create an empty source image
'''
# check local repository
if not istools.isfile(path):
raise NotImplementedError("SourceImage must be local")
# main path
build_path = os.path.join(path, "build")
parser_path = os.path.join(path, "parser")
setup_path = os.path.join(path, "setup")
payload_path = os.path.join(path, "payload")
lib_path = os.path.join(path, "lib")
# create base directories
arrow("Creating base directories")
try:
for d in (path, build_path, parser_path, setup_path, payload_path,
lib_path):
if not os.path.exists(d) or not os.path.isdir(d):
os.mkdir(d)
except Exception as e:
raise ISError(u"Unable to create directory: %s" % d, e)
# create example files
arrow("Creating examples")
arrowlevel(1)
# create dict of file to create
examples = {}
# create description example from template
examples["description"] = {"path": "description",
"content": istemplate.description % {
"name": "",
"version": "1",
"description": "",
"author": "",
"is_min_version": installsystems.version}}
# create changelog example from template
examples["changelog"] = {"path": "changelog", "content": istemplate.changelog}
# create build example from template
examples["build"] = {"path": "build/01-build.py", "content": istemplate.build}
# create parser example from template
examples["parser"] = {"path": "parser/01-parser.py", "content": istemplate.parser}
# create setup example from template
examples["setup"] = {"path": "setup/01-setup.py", "content": istemplate.setup}
for name in examples:
try:
arrow(u"Creating %s example" % name)
expath = os.path.join(path, examples[name]["path"])
if not force and os.path.exists(expath):
warn(u"%s already exists. Skipping!" % expath)
continue
open(expath, "w").write(examples[name]["content"])
except Exception as e:
raise ISError(u"Unable to create example file", e)
try:
# setting executable rights on files in setup and parser
arrow("Setting executable rights on scripts")
umask = os.umask(0)
os.umask(umask)
for dpath in (build_path, parser_path, setup_path):
for f in os.listdir(dpath):
istools.chrights(os.path.join(dpath, f), mode=0777 & ~umask)
except Exception as e:
raise ISError(u"Unable to set executable rights on scripts", e)
arrowlevel(-1)
def __init__(self, path):
# check local repository
if not istools.isfile(path):
raise NotImplementedError("SourceImage must be local")
Image.__init__(self)
self.base_path = os.path.abspath(path)
for pathtype in ("build", "parser", "setup", "payload", "lib"):
setattr(self, u"%s_path" % pathtype, os.path.join(self.base_path, pathtype))
self.check_source_image()
self.description = self.parse_description()
self.changelog = self.parse_changelog()
# script tarball path
self.image_name = u"%s-%s%s" % (self.description["name"],
self.description["version"],
self.extension)
def check_source_image(self):
'''
Check that this is a valid SourceImage directory tree
'''
for d in (self.base_path, self.build_path, self.parser_path,
self.setup_path, self.payload_path, self.lib_path):
if not os.path.exists(d):
raise ISError(u"Invalid source image: directory %s is missing" % d)
if not os.path.isdir(d):
raise ISError(u"Invalid source image: %s is not a directory" % d)
if not os.access(d, os.R_OK|os.X_OK):
raise ISError(u"Invalid source image: unable to access to %s" % d)
if not os.path.exists(os.path.join(self.base_path, "description")):
raise ISError("Invalid source image: no description file")
def build(self, force=False, force_payload=False, check=True, script=True):
'''
Create packaged image
'''
# check if free to create script tarball
if os.path.exists(self.image_name) and not force:
raise ISError("Tarball already exists. Remove it first")
# check python scripts
if check:
for d in (self.build_path, self.parser_path, self.setup_path,
self.lib_path):
self.check_scripts(d)
# remove list
rl = set()
# run build script
if script:
rl |= set(self.run_scripts(self.build_path, self.payload_path))
if force_payload:
rl |= set(self.select_payloads())
# remove payloads
self.remove_payloads(rl)
# create payload files
self.create_payloads()
# generate a json description
jdesc = self.generate_json_description()
# creating scripts tarball
self.create_image(jdesc)
def create_image(self, jdescription):
'''
Create a script tarball in current directory
'''
# create tarball
arrow("Creating image tarball")
arrowlevel(1)
arrow(u"Name %s" % self.image_name)
try:
try:
tarball = Tarball.open(self.image_name, mode="w:gz", dereference=True)
except Exception as e:
raise ISError(u"Unable to create tarball %s" % self.image_name, e)
# add description.json
arrow("Add description.json")
tarball.add_str("description.json", jdescription, tarfile.REGTYPE, 0644)
# add changelog
if self.changelog is not None:
arrow("Add changelog")
tarball.add_str("changelog", self.changelog.verbatim, tarfile.REGTYPE, 0644)
# add format
arrow("Add format")
tarball.add_str("format", self.format, tarfile.REGTYPE, 0644)
# add build scripts
self.add_scripts(tarball, self.build_path)
# add parser scripts
self.add_scripts(tarball, self.parser_path)
# add setup scripts
self.add_scripts(tarball, self.setup_path)
# add lib
self.add_scripts(tarball, self.lib_path)
# closing tarball file
tarball.close()
except (SystemExit, KeyboardInterrupt):
if os.path.exists(self.image_name):
os.unlink(self.image_name)
arrowlevel(-1)
def describe_payload(self, name):
'''
Return information about a payload
'''
ans = {}
ans["source_path"] = os.path.join(self.payload_path, name)
ans["dest_path"] = u"%s-%s%s" % (self.description["name"],
name,
Payload.extension)
ans["link_path"] = u"%s-%s-%s%s" % (self.description["name"],
self.description["version"],
name,
Payload.extension)
source_stat = os.stat(ans["source_path"])
ans["isdir"] = stat.S_ISDIR(source_stat.st_mode)
ans["uid"] = source_stat.st_uid
ans["gid"] = source_stat.st_gid
ans["mode"] = stat.S_IMODE(source_stat.st_mode)
ans["mtime"] = source_stat.st_mtime
return ans
def select_payloads(self):
'''
Return a generator on image payloads
'''
for payname in os.listdir(self.payload_path):
yield payname
def remove_payloads(self, paylist):
'''
Remove payload list if exists
'''
arrow("Removing payloads")
for pay in paylist:
arrow(pay, 1)
desc = self.describe_payload(pay)
for f in (desc["dest_path"], desc["link_path"]):
if os.path.lexists(f):
os.unlink(f)
def create_payloads(self):
'''
Create all missing data payloads in current directory
Doesn't compute md5 during creation because tarball can
be created manually
Also creates a symlink to the versioned payload
'''
arrow("Creating payloads")
for payload_name in self.select_payloads():
paydesc = self.describe_payload(payload_name)
if os.path.exists(paydesc["link_path"]):
continue
arrow(payload_name, 1)
try:
# create non versionned payload file
if not os.path.exists(paydesc["dest_path"]):
if paydesc["isdir"]:
self.create_payload_tarball(paydesc["dest_path"],
paydesc["source_path"])
else:
self.create_payload_file(paydesc["dest_path"],
paydesc["source_path"])
# create versionned payload file
if os.path.lexists(paydesc["link_path"]):
os.unlink(paydesc["link_path"])
os.symlink(paydesc["dest_path"], paydesc["link_path"])
except Exception as e:
raise ISError(u"Unable to create payload %s" % payload_name, e)
def create_payload_tarball(self, tar_path, data_path):
'''
Create a payload tarball
'''
try:
# get compressor argv (first to escape file creation if not found)
a_comp = istools.get_compressor_path(self.compressor, compress=True)
a_tar = ["tar", "--create", "--numeric-owner", "--directory",
data_path, "."]
# create destination file
f_dst = PipeFile(tar_path, "w", progressbar=True)
# run tar process
p_tar = subprocess.Popen(a_tar, shell=False, close_fds=True,
stdout=subprocess.PIPE)
# run compressor process
p_comp = subprocess.Popen(a_comp, shell=False, close_fds=True,
stdin=p_tar.stdout, stdout=subprocess.PIPE)
# write data from compressor to tar_path
f_dst.consume(p_comp.stdout)
# close all fd
p_tar.stdout.close()
p_comp.stdout.close()
f_dst.close()
# check tar return 0
if p_tar.wait() != 0:
raise ISError("Tar return is not zero")
# check compressor return 0
if p_comp.wait() != 0:
raise ISError(u"Compressor %s return is not zero" % a_comp[0])
except (SystemExit, KeyboardInterrupt):
if os.path.exists(tar_path):
os.unlink(tar_path)
raise
def create_payload_file(self, dest, source):
'''
Create a payload file
'''
try:
# get compressor argv (first to escape file creation if not found)
a_comp = istools.get_compressor_path(self.compressor, compress=True)
# open source file
f_src = open(source, "r")
# create destination file
f_dst = PipeFile(dest, "w", progressbar=True)
# run compressor
p_comp = subprocess.Popen(a_comp, shell=False, close_fds=True,
stdin=f_src, stdout=subprocess.PIPE)
# close source file fd
f_src.close()
# write data from compressor to dest file
f_dst.consume(p_comp.stdout)
# close compressor stdin and destination file
p_comp.stdout.close()
f_dst.close()
# check compressor return 0
if p_comp.wait() != 0:
raise ISError(u"Compressor %s return is not zero" % a_comp[0])
except (SystemExit, KeyboardInterrupt):
if os.path.exists(dest):
os.unlink(dest)
raise
def add_scripts(self, tarball, directory):
'''
Add scripts inside a directory into a tarball
'''
basedirectory = os.path.basename(directory)
arrow(u"Add %s scripts" % basedirectory)
arrowlevel(1)
# adding base directory
ti = tarball.gettarinfo(directory, arcname=basedirectory)
ti.mode = 0755
ti.uid = ti.gid = 0
ti.uname = ti.gname = "root"
tarball.addfile(ti)
# adding each file
for fp, fn in self.select_scripts(directory):
ti = tarball.gettarinfo(fp, arcname=os.path.join(basedirectory, fn))
ti.mode = 0755
ti.uid = ti.gid = 0
ti.uname = ti.gname = "root"
tarball.addfile(ti, open(fp, "rb"))
arrow(u"%s added" % fn)
arrowlevel(-1)
def check_scripts(self, directory):
'''
Check if scripts inside a directory can be compiled
'''
basedirectory = os.path.basename(directory)
arrow(u"Checking %s scripts" % basedirectory)
arrowlevel(1)
# checking each file
for fp, fn in self.select_scripts(directory):
# compiling file
fs = open(fp, "r").read()
compile(fs, fp.encode(locale.getpreferredencoding()), mode="exec")
arrow(fn)
arrowlevel(-1)
def run_scripts(self, script_directory, exec_directory):
'''
Execute script inside a directory
Return a list of payload to force rebuild
'''
arrow(u"Run %s scripts" % os.path.basename(script_directory))
rebuild_list = []
cwd = os.getcwd()
arrowlevel(1)
# load modules
lib_list = [fp.encode(locale.getpreferredencoding())
for fp, fn in self.select_scripts(self.lib_path)]
func = lambda f: open(f).read()
modules = self._load_modules(lib_list, func)
for fp, fn in self.select_scripts(script_directory):
arrow(fn)
os.chdir(exec_directory)
old_level = arrowlevel(1)
# compile source code
try:
o_scripts = compile(open(fp, "r").read(), fn, "exec")
except Exception as e:
raise ISError(u"Unable to compile %s fail" % fn, e)
# define execution context
gl = {"rebuild": rebuild_list,
"image": self}
# add embedded modules
gl.update(modules)
# execute source code
try:
exec o_scripts in gl
except Exception as e:
raise ISError(u"Execution script %s fail" % fn, e)
arrowlevel(level=old_level)
os.chdir(cwd)
arrowlevel(-1)
return rebuild_list
def select_scripts(self, directory):
'''
Select scripts in a directory which are eligible to run (named NN-*.py and executable)
'''
for fn in sorted(os.listdir(directory)):
fp = os.path.join(directory, fn)
# check name
if not re.match("\d+-.*\.py$", fn):
continue
# check execution bit
if not os.access(fp, os.X_OK):
continue
# yield complete filepath and script name only
yield fp, fn
def generate_json_description(self):
'''
Generate a JSON description file
'''
arrow("Generating JSON description")
arrowlevel(1)
# copy description
desc = self.description.copy()
# timestamp image
arrow("Timestamping")
desc["date"] = int(time.time())
# watermark
desc["is_build_version"] = installsystems.version
# append payload infos
arrow("Checksumming payloads")
desc["payload"] = {}
for payload_name in self.select_payloads():
arrow(payload_name, 1)
# getting payload info
payload_desc = self.describe_payload(payload_name)
# compute md5 and size
fileobj = PipeFile(payload_desc["link_path"], "r")
fileobj.consume()
fileobj.close()
# create payload entry
desc["payload"][payload_name] = {
"md5": fileobj.md5,
"size": fileobj.size,
"isdir": payload_desc["isdir"],
"uid": payload_desc["uid"],
"gid": payload_desc["gid"],
"mode": payload_desc["mode"],
"mtime": payload_desc["mtime"]
}
arrowlevel(-1)
# check md5 are uniq
md5s = [v["md5"] for v in desc["payload"].values()]
if len(md5s) != len(set(md5s)):
raise ISError("Two payloads cannot have the same md5")
# serialize
return json.dumps(desc)
def parse_description(self):
'''
Raise an exception if the description file is invalid and return vars to include
'''
arrow("Parsing description")
d = dict()
try:
descpath = os.path.join(self.base_path, "description")
cp = ConfigParser.RawConfigParser()
cp.readfp(codecs.open(descpath, "r", "utf8"))
for n in ("name","version", "description", "author"):
d[n] = cp.get("image", n)
# get min image version
if cp.has_option("image", "is_min_version"):
d["is_min_version"] = cp.get("image", "is_min_version")
else:
d["is_min_version"] = 0
# check image name
self.check_image_name(d["name"])
# check image version
self.check_image_version(d["version"])
# check installsystems min version
if self.compare_versions(installsystems.version, d["is_min_version"]) < 0:
raise ISError("Minimum Installsystems version not satisfied")
except Exception as e:
raise ISError(u"Bad description", e)
return d
def parse_changelog(self):
'''
Create a changelog object from a file
'''
# try to find a changelog file
try:
path = os.path.join(self.base_path, "changelog")
fo = codecs.open(path, "r", "utf8")
except IOError:
return None
# we have it, we need to check everything is ok
arrow("Parsing changelog")
try:
cl = Changelog(fo.read())
except Exception as e:
raise ISError(u"Bad changelog", e)
return cl
@property
def compressor(self):
'''
Return image compressor
'''
# currently only support gzip
return "gzip"
class PackageImage(Image):
'''
Packaged image manipulation class
'''
@classmethod
def diff(cls, pkg1, pkg2):
'''
Diff two packaged images
'''
arrow(u"Difference from images #y#%s v%s#R# to #r#%s v%s#R#:" % (pkg1.name,
pkg1.version,
pkg2.name,
pkg2.version))
# Extract images for diff scripts files
fromfiles = set(pkg1._tarball.getnames(re_pattern="(parser|setup)/.*"))
tofiles = set(pkg2._tarball.getnames(re_pattern="(parser|setup)/.*"))
for f in fromfiles | tofiles:
# preparing from info
if f in fromfiles:
fromfile = os.path.join(pkg1.filename, f)
fromdata = pkg1._tarball.extractfile(f).readlines()
else:
fromfile = "/dev/null"
fromdata = ""
# preparing to info
if f in tofiles:
tofile = os.path.join(pkg2.filename, f)
todata = pkg2._tarball.extractfile(f).readlines()
else:
tofile = "/dev/null"
todata = ""
# generate diff
for line in difflib.unified_diff(fromdata, todata,
fromfile=fromfile, tofile=tofile):
# coloring diff
if line.startswith("+"):
out(u"#g#%s#R#" % line, endl="")
elif line.startswith("-"):
out(u"#r#%s#R#" % line, endl="")
elif line.startswith("@@"):
out(u"#c#%s#R#" % line, endl="")
else:
out(line, endl="")
def __init__(self, path, fileobj=None, md5name=False):
'''
Initialize a package image
fileobj must be a seekable fileobj
'''
Image.__init__(self)
self.path = istools.abspath(path)
self.base_path = os.path.dirname(self.path)
        # tarballs are named by md5 and not by their real name
self.md5name = md5name
try:
if fileobj is None:
fileobj = PipeFile(self.path, "r")
else:
fileobj = PipeFile(mode="r", fileobj=fileobj)
memfile = cStringIO.StringIO()
fileobj.consume(memfile)
# close source
fileobj.close()
            # get downloaded size and md5
self.size = fileobj.read_size
self.md5 = fileobj.md5
memfile.seek(0)
self._tarball = Tarball.open(fileobj=memfile, mode='r:gz')
except Exception as e:
raise ISError(u"Unable to open image %s" % path, e)
self._metadata = self.read_metadata()
# print info
arrow(u"Image %s v%s loaded" % (self.name, self.version))
arrow(u"Author: %s" % self.author, 1)
arrow(u"Date: %s" % istools.time_rfc2822(self.date), 1)
# build payloads info
self.payload = {}
for pname, pval in self._metadata["payload"].items():
pfilename = u"%s-%s%s" % (self.filename[:-len(Image.extension)],
pname, Payload.extension)
if self.md5name:
ppath = os.path.join(self.base_path,
self._metadata["payload"][pname]["md5"])
else:
ppath = os.path.join(self.base_path, pfilename)
self.payload[pname] = Payload(pname, pfilename, ppath, **pval)
def __getattr__(self, name):
'''
Give direct access to description field
'''
if name in self._metadata:
return self._metadata[name]
raise AttributeError
@property
def filename(self):
'''
Return image filename
'''
return u"%s-%s%s" % (self.name, self.version, self.extension)
def read_metadata(self):
'''
Parse tarball and return metadata dict
'''
desc = {}
# check format
img_format = self._tarball.get_utf8("format")
try:
if float(img_format) >= math.floor(float(self.format)) + 1.0:
raise Exception()
except:
raise ISError(u"Invalid image format %s" % img_format)
desc["format"] = img_format
# check description
try:
img_desc = self._tarball.get_utf8("description.json")
desc.update(json.loads(img_desc))
self.check_image_name(desc["name"])
self.check_image_version(desc["version"])
# add is_min_version if not present
if "is_min_version" not in desc:
desc["is_min_version"] = 0
# check installsystems min version
if self.compare_versions(installsystems.version, desc["is_min_version"]) < 0:
raise ISError("Minimum Installsystems version not satisfied")
except Exception as e:
raise ISError(u"Invalid description", e)
# try to load changelog
try:
img_changelog = self._tarball.get_utf8("changelog")
desc["changelog"] = Changelog(img_changelog)
except KeyError:
desc["changelog"] = Changelog("")
except Exception as e:
warn(u"Invalid changelog: %s" % e)
return desc
def show(self, o_verbose=False, o_changelog=False, o_json=False):
'''
Display image content
'''
if o_json:
out(json.dumps(self._metadata))
else:
out(u'#light##yellow#Name:#reset# %s' % self.name)
out(u'#light##yellow#Version:#reset# %s' % self.version)
out(u'#yellow#Date:#reset# %s' % istools.time_rfc2822(self.date))
out(u'#yellow#Description:#reset# %s' % self.description)
out(u'#yellow#Author:#reset# %s' % self.author)
if o_verbose:
                # field is_build_version is new in version 5. It can be absent.
try: out(u'#yellow#IS build version:#reset# %s' % self.is_build_version)
except AttributeError: pass
                # field is_min_version is new in version 5. It can be absent.
try: out(u'#yellow#IS minimum version:#reset# %s' % self.is_min_version)
except AttributeError: pass
out(u'#yellow#MD5:#reset# %s' % self.md5)
if o_verbose:
payloads = self.payload
for payload_name in payloads:
payload = payloads[payload_name]
out(u'#light##yellow#Payload:#reset# %s' % payload_name)
out(u' #yellow#Date:#reset# %s' % istools.time_rfc2822(payload.mtime))
out(u' #yellow#Size:#reset# %s' % (istools.human_size(payload.size)))
out(u' #yellow#MD5:#reset# %s' % payload.md5)
# display image content
out('#light##yellow#Content:#reset#')
self._tarball.list(o_verbose)
# display changelog
if o_changelog:
self.changelog.show(int(self.version), o_verbose)
def check(self, message="Check MD5"):
'''
        Check that md5 and size of tarballs are correct
        Download the tarball from path and compare loaded and remote md5
'''
arrow(message)
arrowlevel(1)
# check image
fo = PipeFile(self.path, "r")
fo.consume()
fo.close()
if self.size != fo.read_size:
raise ISError(u"Invalid size of image %s" % self.name)
if self.md5 != fo.md5:
raise ISError(u"Invalid MD5 of image %s" % self.name)
# check payloads
for pay_name, pay_obj in self.payload.items():
arrow(pay_name)
pay_obj.check()
arrowlevel(-1)
def cat(self, filename):
'''
Display filename in the tarball
'''
filelist = self._tarball.getnames(glob_pattern=filename, dir=False)
if len(filelist) == 0:
warn(u"No file matching %s" % filename)
for filename in filelist:
arrow(filename)
out(self._tarball.get_utf8(filename))
def download(self, directory, force=False, image=True, payload=False):
'''
        Download image into directory
        Does not use the in-memory image because we cannot access it;
        this avoids interfering with self._tarball access to memfile
'''
# check if destination exists
directory = os.path.abspath(directory)
if image:
dest = os.path.join(directory, self.filename)
if not force and os.path.exists(dest):
raise ISError(u"Image destination already exists: %s" % dest)
# some display
arrow(u"Downloading image in %s" % directory)
debug(u"Downloading %s from %s" % (self.filename, self.path))
# open source
fs = PipeFile(self.path, progressbar=True)
# check if announced file size is good
if fs.size is not None and self.size != fs.size:
raise ISError(u"Downloading image %s failed: Invalid announced size" % self.name)
            # open destination file inside the destination directory
            fd = open(dest, "wb")
fs.consume(fd)
fs.close()
fd.close()
            if self.size != fs.read_size:
                raise ISError(u"Downloading image %s failed: Invalid size" % self.name)
            if self.md5 != fs.md5:
                raise ISError(u"Downloading image %s failed: Invalid MD5" % self.name)
if payload:
for payname in self.payload:
arrow(u"Downloading payload %s in %s" % (payname, directory))
                # accessing info forces md5/size computation before download
                self.payload[payname].info
self.payload[payname].download(directory, force=force)
def extract(self, directory, force=False, payload=False, gendescription=False):
'''
Extract content of the image inside a repository
'''
# check validity of dest
if os.path.exists(directory):
if not os.path.isdir(directory):
raise ISError(u"Destination %s is not a directory" % directory)
if not force and len(os.listdir(directory)) > 0:
raise ISError(u"Directory %s is not empty (need force)" % directory)
else:
istools.mkdir(directory)
# extract content
arrow(u"Extracting image in %s" % directory)
self._tarball.extractall(directory)
# generate description file from description.json
if gendescription:
arrow(u"Generating description file in %s" % directory)
with open(os.path.join(directory, "description"), "w") as f:
f.write((istemplate.description % self._metadata).encode("UTF-8"))
# launch payload extraction
if payload:
for payname in self.payload:
                # payname is unicode; encode it so that tarfile does not
                # try to encode member filenames as unicode itself
dest = os.path.join(directory, "payload", payname.encode("UTF-8"))
arrow(u"Extracting payload %s in %s" % (payname, dest))
self.payload[payname].extract(dest, force=force)
def run_parser(self, **kwargs):
'''
Run parser scripts
'''
self._run_scripts("parser", **kwargs)
def run_setup(self, **kwargs):
'''
Run setup scripts
'''
self._run_scripts("setup", **kwargs)
def _run_scripts(self, directory, **kwargs):
'''
Run scripts in a tarball directory
'''
arrow(u"Run %s scripts" % directory)
arrowlevel(1)
# load modules
lib_list = self._tarball.getnames(re_pattern="lib/.*\.py")
modules = self._load_modules(lib_list, self._tarball.get_str)
        # get list of scripts to run
l_scripts = self._tarball.getnames(re_pattern="%s/.*\.py" % directory)
        # order matters!
l_scripts.sort()
# run scripts
for n_scripts in l_scripts:
arrow(os.path.basename(n_scripts))
old_level = arrowlevel(1)
# extract source code
try:
s_scripts = self._tarball.get_str(n_scripts)
except Exception as e:
                raise ISError(u"Extracting script %s failed" % n_scripts, e)
# compile source code
try:
o_scripts = compile(s_scripts, n_scripts, "exec")
except Exception as e:
                raise ISError(u"Compiling script %s failed" % n_scripts, e)
# define execution context
gl = {}
for k in kwargs:
gl[k] = kwargs[k]
gl["image"] = self
# Add embedded modules
gl.update(modules)
# execute source code
try:
exec o_scripts in gl
except Exception as e:
                raise ISError(u"Executing script %s failed" % n_scripts, e)
arrowlevel(level=old_level)
arrowlevel(-1)
class Payload(object):
'''
Payload class represents a payload object
'''
extension = ".isdata"
legit_attr = ("isdir", "md5", "size", "uid", "gid", "mode", "mtime", "compressor")
def __init__(self, name, filename, path, **kwargs):
object.__setattr__(self, "name", name)
object.__setattr__(self, "filename", filename)
object.__setattr__(self, "path", path)
# register legit param
for attr in self.legit_attr:
setattr(self, attr, None)
# set all named param
for kwarg in kwargs:
# do not use hasattr which use getattr and so call md5 checksum...
if kwarg in self.legit_attr:
setattr(self, kwarg, kwargs[kwarg])
def __getattr__(self, name):
        # resolve attribute reads through the underscore-prefixed storage slot
if hasattr(self, u"_%s" % name):
return getattr(self, u"_%s" % name)
raise AttributeError
def __setattr__(self, name, value):
        # legit attributes are stored under an underscore-prefixed name
if name in self.legit_attr:
object.__setattr__(self, u"_%s" % name, value)
else:
object.__setattr__(self, name, value)
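The underscore indirection in `__getattr__`/`__setattr__` above lets properties such as `md5` and `size` lazily fill missing values. A minimal standalone sketch of the pattern (the `Lazy` class and its reduced `legit_attr` tuple are illustrative, not part of InstallSystems):

```python
class Lazy(object):
    # attributes listed in legit_attr are stored under an
    # underscore-prefixed name; reads fall through to that slot
    legit_attr = ("md5", "size")

    def __init__(self, **kwargs):
        for attr in self.legit_attr:
            object.__setattr__(self, "_" + attr, kwargs.get(attr))

    def __setattr__(self, name, value):
        if name in self.legit_attr:
            object.__setattr__(self, "_" + name, value)
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # only called when normal lookup fails, i.e. for "md5"/"size"
        try:
            return object.__getattribute__(self, "_" + name)
        except AttributeError:
            raise AttributeError(name)

p = Lazy(md5="abc")
```

A real property (as with `Payload.md5`) would intercept the read and compute the checksum whenever the stored `_md5` slot is still `None`.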
def checksummize(self):
'''
Fill missing md5/size about payload
'''
fileobj = PipeFile(self.path, "r")
fileobj.consume()
fileobj.close()
if self._size is None:
self._size = fileobj.read_size
if self._md5 is None:
self._md5 = fileobj.md5
@property
def md5(self):
'''
Return md5 of payload
'''
if self._md5 is None:
self.checksummize()
return self._md5
@property
def size(self):
'''
Return size of payload
'''
if self._size is None:
self.checksummize()
return self._size
@property
def uid(self):
'''
        Return uid of owner of original payload
'''
return self._uid if self._uid is not None else 0
@property
def gid(self):
'''
        Return gid of owner of original payload
'''
return self._gid if self._gid is not None else 0
@property
def mode(self):
'''
        Return mode of original payload
'''
if self._mode is not None:
return self._mode
else:
umask = os.umask(0)
os.umask(umask)
return 0666 & ~umask
@property
def mtime(self):
'''
        Return last modification time of original payload
'''
return self._mtime if self._mtime is not None else time.time()
@property
def compressor(self):
'''
Return payload compress format
'''
return self._compressor if self._compressor is not None else "gzip"
@property
def info(self):
'''
        Return a dict of info about the current payload
        Automatically computed info such as name and filename must not be included
'''
return {"md5": self.md5,
"size": self.size,
"isdir": self.isdir,
"uid": self.uid,
"gid": self.gid,
"mode": self.mode,
"mtime": self.mtime}
def check(self):
'''
        Check that path corresponds to current md5 and size
'''
if self._size is None or self._md5 is None:
debug("Check is called on payload with nothing to check")
return True
fileobj = PipeFile(self.path, "r")
fileobj.consume()
fileobj.close()
if self._size != fileobj.read_size:
raise ISError(u"Invalid size of payload %s" % self.name)
if self._md5 != fileobj.md5:
            raise ISError(u"Invalid MD5 of payload %s" % self.name)
def download(self, dest, force=False):
'''
Download payload in directory
'''
# if dest is a directory try to create file inside
if os.path.isdir(dest):
dest = os.path.join(dest, self.filename)
# try to create leading directories
elif not os.path.exists(os.path.dirname(dest)):
istools.mkdir(os.path.dirname(dest))
# check validity of dest
if os.path.exists(dest):
if os.path.isdir(dest):
raise ISError(u"Destination %s is a directory" % dest)
if not force:
raise ISError(u"File %s already exists" % dest)
# Open remote file
debug(u"Downloading payload %s from %s" % (self.filename, self.path))
fs = PipeFile(self.path, progressbar=True)
# check if announced file size is good
if fs.size is not None and self.size != fs.size:
raise ISError(u"Downloading payload %s failed: Invalid announced size" %
self.name)
fd = open(dest, "wb")
fs.consume(fd)
        # close source and destination
fs.close()
fd.close()
# checking download size
if self.size != fs.read_size:
raise ISError(u"Downloading payload %s failed: Invalid size" % self.name)
if self.md5 != fs.md5:
raise ISError(u"Downloading payload %s failed: Invalid MD5" % self.name)
def extract(self, dest, force=False, filelist=None):
'''
Extract payload into dest
        filelist is a filter on files inside the tarball
        force overwrites existing files
'''
try:
if self.isdir:
self.extract_tar(dest, force=force, filelist=filelist)
else:
self.extract_file(dest, force=force)
except Exception as e:
raise ISError(u"Extracting payload %s failed" % self.name, e)
def extract_tar(self, dest, force=False, filelist=None):
'''
        Extract a payload which is a tarball.
        This is mainly used for payloads built from a directory
'''
# check validity of dest
if os.path.exists(dest):
if not os.path.isdir(dest):
raise ISError(u"Destination %s is not a directory" % dest)
if not force and len(os.listdir(dest)) > 0:
raise ISError(u"Directory %s is not empty (need force)" % dest)
else:
istools.mkdir(dest)
# try to open payload file
try:
fo = PipeFile(self.path, progressbar=True)
except Exception as e:
            raise ISError(u"Unable to open %s" % self.path, e)
# check if announced file size is good
if fo.size is not None and self.size != fo.size:
raise ISError(u"Invalid announced size on %s" % self.path)
# get compressor argv (first to escape file creation if not found)
a_comp = istools.get_compressor_path(self.compressor, compress=False)
a_tar = ["tar", "--extract", "--numeric-owner", "--ignore-zeros",
"--preserve-permissions", "--directory", dest]
        # add optional selected filenames for extraction
if filelist is not None:
a_tar += filelist
p_tar = subprocess.Popen(a_tar, shell=False, close_fds=True,
stdin=subprocess.PIPE)
p_comp = subprocess.Popen(a_comp, shell=False, close_fds=True,
stdin=subprocess.PIPE, stdout=p_tar.stdin)
# close tar fd
p_tar.stdin.close()
# push data into compressor
fo.consume(p_comp.stdin)
# close source fd
fo.close()
# checking downloaded size
if self.size != fo.read_size:
raise ISError("Invalid size")
# checking downloaded md5
if self.md5 != fo.md5:
raise ISError("Invalid MD5")
# close compressor pipe
p_comp.stdin.close()
# check compressor return 0
if p_comp.wait() != 0:
            raise ISError(u"Compressor %s returned non-zero" % a_comp[0])
# check tar return 0
if p_tar.wait() != 0:
            raise ISError("Tar returned non-zero")
def extract_file(self, dest, force=False):
'''
Copy a payload directly to a file
Check md5 on the fly
'''
# if dest is a directory try to create file inside
if os.path.isdir(dest):
dest = os.path.join(dest, self.name)
# try to create leading directories
elif not os.path.exists(os.path.dirname(dest)):
istools.mkdir(os.path.dirname(dest))
# check validity of dest
if os.path.exists(dest):
if os.path.isdir(dest):
raise ISError(u"Destination %s is a directory" % dest)
if not force:
raise ISError(u"File %s already exists" % dest)
# get compressor argv (first to escape file creation if not found)
a_comp = istools.get_compressor_path(self.compressor, compress=False)
# try to open payload file (source)
try:
f_src = PipeFile(self.path, "r", progressbar=True)
except Exception as e:
raise ISError(u"Unable to open payload file %s" % self.path, e)
# check if announced file size is good
if f_src.size is not None and self.size != f_src.size:
raise ISError(u"Invalid announced size on %s" % self.path)
# opening destination
try:
f_dst = open(dest, "wb")
except Exception as e:
raise ISError(u"Unable to open destination file %s" % dest, e)
# run compressor process
p_comp = subprocess.Popen(a_comp, shell=False, close_fds=True,
stdin=subprocess.PIPE, stdout=f_dst)
# close destination file
f_dst.close()
# push data into compressor
f_src.consume(p_comp.stdin)
# closing source fo
f_src.close()
# checking download size
if self.size != f_src.read_size:
raise ISError("Invalid size")
# checking downloaded md5
if self.md5 != f_src.md5:
raise ISError("Invalid MD5")
# close compressor pipe
p_comp.stdin.close()
# check compressor return 0
if p_comp.wait() != 0:
            raise ISError(u"Compressor %s returned non-zero" % a_comp[0])
        # restore original file rights
istools.chrights(dest, self.uid, self.gid, self.mode, self.mtime)
class Changelog(dict):
'''
Object representing a changelog in memory
'''
def __init__(self, data):
self.verbatim = u""
self.load(data)
def load(self, data):
'''
Load a changelog file
'''
        # ensure data is valid UTF-8
if isinstance(data, str):
try:
data = unicode(data, "UTF-8")
except UnicodeDecodeError:
raise ISError("Invalid character encoding in changelog")
version = None
lines = data.split("\n")
for line in lines:
# ignore empty lines
if len(line.strip()) == 0:
continue
# ignore comments
if line.lstrip().startswith("#"):
continue
# try to match a new version
m = re.match("\[(\d+)\]", line.lstrip())
if m is not None:
version = int(m.group(1))
self[version] = []
continue
            # a line outside any version section means invalid format
if version is None:
raise ISError("Invalid format: Line outside version")
# add line to version changelog
self[version] += [line]
# save original
self.verbatim = data
def show(self, version=None, verbose=False):
'''
Show changelog for a given version or all
'''
out('#light##yellow#Changelog:#reset#')
        # if no version given, take the highest
if version is None:
version = max(self)
# display asked version
if version in self:
self._show_version(version)
# display all version in verbose mode
if verbose:
for ver in sorted((k for k in self if k < version), reverse=True):
self._show_version(ver)
def _show_version(self, version):
'''
Display a version content
'''
out(u' #yellow#Version:#reset# %s' % version)
out(os.linesep.join(self[version]))
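The `Changelog.load()` parser above accepts a simple line-based format: `[N]` headers open a version section, `#` comments and blank lines are skipped, and any other line is appended to the current section. A self-contained sketch of the same loop (`ValueError` stands in for `ISError` so it runs standalone; the sample text is made up):

```python
import re

def parse_changelog(data):
    # map version numbers to their changelog lines
    entries = {}
    version = None
    for line in data.split("\n"):
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blanks and comments
        m = re.match(r"\[(\d+)\]", line.lstrip())
        if m is not None:
            version = int(m.group(1))
            entries[version] = []
            continue
        if version is None:
            raise ValueError("Invalid format: Line outside version")
        entries[version].append(line)
    return entries

sample = "# comment\n[2]\n* fix extraction\n[1]\n* initial release\n"
```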
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/printer.py

# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
Install Systems Printer module
'''
import locale
import sys
import os
import re
import traceback
import installsystems
from installsystems.exception import *
NOCOLOR = False
COLOR = {
# regular
"black": "\033[30m",
"B": "\033[30m",
"red": "\033[31m",
"r": "\033[31m",
"green": "\033[32m",
"g": "\033[32m",
"yellow": "\033[33m",
"y": "\033[33m",
"blue": "\033[34m",
"b": "\033[34m",
"purple": "\033[35m",
"p": "\033[35m",
"cyan": "\033[36m",
"c": "\033[36m",
"white": "\033[37m",
"w": "\033[37m",
# others
"under": "\033[4m",
"u": "\033[4m",
"light": "\033[1m",
"l": "\033[1m",
"reset": "\033[m",
"R": "\033[m",
}
# _arrow_level is between 1 and 4
# it is the indentation level of arrow()
_arrow_level = 1
def out(message="", fd=sys.stdout, endl=os.linesep, flush=True):
'''
Print message colorised in fd ended by endl
'''
    # color substitution
color_pattern = "#(%s)#" % "|".join(COLOR)
if not fd.isatty() or NOCOLOR:
f = lambda obj: ""
else:
f = lambda obj: COLOR[obj.group(1)]
message = re.sub(color_pattern, f, message)
# convert unicode into str before write
    # this can cause issues on python 2.6
if type(message) == unicode:
message = message.encode(locale.getpreferredencoding(), "replace")
# printing
fd.write("%s%s" % (message, endl))
if flush:
fd.flush()
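The marker substitution done by `out()` can be sketched in isolation: `#name#` markers become ANSI escapes when writing to a tty and are stripped otherwise (the reduced `COLOR` table here is illustrative):

```python
import re

COLOR = {"red": "\033[31m", "light": "\033[1m", "reset": "\033[m"}

def colorize(message, tty=True):
    # "#red#" style markers -> ANSI escapes on a tty, stripped otherwise
    pattern = "#(%s)#" % "|".join(COLOR)
    repl = (lambda m: COLOR[m.group(1)]) if tty else (lambda m: "")
    return re.sub(pattern, repl, message)
```

For example, `colorize("#red#oops#reset#", tty=False)` yields plain `"oops"`.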
def err(message, fd=sys.stderr, endl=os.linesep):
'''
Print a message on stderr
'''
out(message, fd, endl)
def fatal(message, quit=True, fd=sys.stderr, endl=os.linesep):
out(u"#light##red#Fatal:#reset# #red#%s#reset#" % message, fd, endl)
if sys.exc_info()[0] is not None and installsystems.verbosity > 1:
raise
if quit:
os._exit(21)
def error(message=None, exception=None, quit=True, fd=sys.stderr, endl=os.linesep):
# create error message
pmesg = u""
if message is not None:
pmesg += unicode(message)
if exception is not None:
if pmesg == "":
pmesg += unicode(exception)
else:
pmesg += u": %s" % unicode(exception)
# print error message
if pmesg != "":
out(u"#light##red#Error:#reset# #red#%s#reset#" % pmesg, fd, endl)
# print traceback in debug mode
if installsystems.verbosity > 1 and isinstance(exception, ISException):
exception.print_tb(fd)
elif installsystems.verbosity > 1:
out("#l##B#", fd=fd, endl="")
traceback.print_exc(file=fd)
out("#R#", fd=fd, endl="")
if quit:
exit(42)
def warn(message, fd=sys.stderr, endl=os.linesep):
out(u"#light##yellow#Warning:#reset# #yellow#%s#reset#" % message, fd, endl)
def info(message, fd=sys.stderr, endl=os.linesep):
if installsystems.verbosity > 0:
out(u"#light#Info:#reset# %s" % message, fd, endl)
def debug(message, fd=sys.stderr, endl=os.linesep):
if installsystems.verbosity > 1:
out(u"#light##black#%s#reset#" % message, fd, endl)
def arrowlevel(inc=None, level=None):
global _arrow_level
old_level = _arrow_level
if level is not None:
_arrow_level = max(1, min(4, level))
if inc is not None:
_arrow_level = max(1, min(4, _arrow_level + inc))
return old_level
def arrow(message, inclevel=None, level=None, fd=sys.stdout, endl=os.linesep):
if installsystems.verbosity == 0:
return
# define new level
old_level = arrowlevel(inc=inclevel, level=level)
if _arrow_level == 1:
out(u"#light##red#=>#reset# %s" % message, fd=fd, endl=endl)
elif _arrow_level == 2:
out(u" #light##yellow#=>#reset# %s" % message, fd=fd, endl=endl)
elif _arrow_level == 3:
out(u" #light##blue#=>#reset# %s" % message, fd=fd, endl=endl)
elif _arrow_level == 4:
out(u" #light##green#=>#reset# %s" % message, fd=fd, endl=endl)
    # restore the old level (for one-shot level changes)
arrowlevel(level = old_level)
def ask(message, fd=sys.stdout, endl=""):
'''
Ask a question on stdin
'''
out(message, fd=fd, endl=endl, flush=True)
return raw_input()
def confirm(message=None, ans=None, fd=sys.stdout, endl=""):
'''
Ask a question on stdin
'''
if ans is None:
ans = "yes"
if message is None:
message = u"#u##l##w#Are you sure?#R# (%s) " % ans
return ask(message, fd, endl) == ans
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/repository.py

# -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
'''
Repository stuff
'''
import os
import re
import time
import shutil
import pwd
import grp
import tempfile
import fnmatch
import cStringIO
import json
import installsystems
import installsystems.tools as istools
from installsystems.exception import *
from installsystems.printer import *
from installsystems.tarball import Tarball
from installsystems.tools import PipeFile
from installsystems.image import Image, PackageImage
from installsystems.database import Database
class Repository(object):
'''
Repository class
'''
@staticmethod
def is_repository_name(name):
return re.match("^[-_\w]+$", name) is not None
@staticmethod
def check_repository_name(name):
'''
        Raise an exception if the repository name is invalid
'''
if not Repository.is_repository_name(name):
raise ISError(u"Invalid repository name %s" % name)
return name
@staticmethod
def split_image_path(path):
'''
Split an image path (repo/image:version)
in a tuple (repo, image, version)
'''
x = re.match(u"^(?:([^/:]+)/)?([^/:]+)?(?::v?([^/:]+)?)?$", path)
if x is None:
raise ISError(u"invalid image path: %s" % path)
return x.group(1, 2, 3)
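The path-splitting regexp above can be exercised standalone (`ValueError` stands in for `ISError`; the example paths are made up):

```python
import re

def split_image_path(path):
    # "repo/image:version" -> (repo, image, version); each part optional
    m = re.match(u"^(?:([^/:]+)/)?([^/:]+)?(?::v?([^/:]+)?)?$", path)
    if m is None:
        raise ValueError("invalid image path: %s" % path)
    return m.group(1, 2, 3)
```

Missing components come back as `None`, so callers can fall back to defaults such as "latest version" or "all repositories".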
@staticmethod
def split_repository_list(repolist, filter=None):
'''
        Return a list of repository names from a comma/space separated string
'''
if filter is None:
filter = Repository.is_repository_name
return [r for r in re.split("[ ,\n\t\v]+", repolist) if filter(r)]
@classmethod
def diff(cls, repo1, repo2):
'''
        Compute a diff between two repositories
'''
arrow(u"Diff between repositories #y#%s#R# and #g#%s#R#" % (repo1.config.name,
repo2.config.name))
# Get info from databases
i_dict1 = dict((b[0], b[1:]) for b in repo1.db.ask(
"SELECT md5, name, version FROM image").fetchall())
i_set1 = set(i_dict1.keys())
i_dict2 = dict((b[0], b[1:]) for b in repo2.db.ask(
"SELECT md5, name, version FROM image").fetchall())
i_set2 = set(i_dict2.keys())
p_dict1 = dict((b[0], b[1:]) for b in repo1.db.ask(
"SELECT md5, name FROM payload").fetchall())
p_set1 = set(p_dict1.keys())
p_dict2 = dict((b[0], b[1:]) for b in repo2.db.ask(
"SELECT md5, name FROM payload").fetchall())
p_set2 = set(p_dict2.keys())
# computing diff
i_only1 = i_set1 - i_set2
i_only2 = i_set2 - i_set1
p_only1 = p_set1 - p_set2
p_only2 = p_set2 - p_set1
# printing functions
pimg = lambda r,c,m,d,: out("#%s#Image only in repository %s: %s v%s (%s)#R#" %
(c, r.config.name, d[m][0], d[m][1], m))
ppay = lambda r,c,m,d,: out("#%s#Payload only in repository %s: %s (%s)#R#" %
(c, r.config.name, d[m][0], m))
# printing image diff
for md5 in i_only1: pimg(repo1, "y", md5, i_dict1)
for md5 in p_only1: ppay(repo1, "y", md5, p_dict1)
for md5 in i_only2: pimg(repo2, "g", md5, i_dict2)
for md5 in p_only2: ppay(repo2, "g", md5, p_dict2)
def __init__(self, config):
self.config = config
self.local = istools.isfile(self.config.path)
if not self.config.offline:
try:
self.db = Database(config.dbpath)
except:
debug(u"Unable to load database %s" % config.dbpath)
self.config.offline = True
if self.config.offline:
debug(u"Repository %s is offline" % config.name)
def __getattribute__(self, name):
'''
Raise an error if repository is unavailable
        It can be unavailable when the db is not accessible or
        when the repository is not initialized
'''
config = object.__getattribute__(self, "config")
# config, init, local are always accessible
if name in ("init", "config", "local"):
return object.__getattribute__(self, name)
# if no db (not init or not accessible) raise error
if config.offline:
raise ISError(u"Repository %s is offline" % config.name)
return object.__getattribute__(self, name)
@property
def version(self):
'''
Return repository version
'''
return self.db.version
def init(self):
'''
Initialize an empty base repository
'''
config = self.config
# check local repository
if not self.local:
raise ISError(u"Repository creation must be local")
# create base directories
arrow("Creating base directories")
arrowlevel(1)
# creating local directory
try:
if os.path.exists(config.path):
arrow(u"%s already exists" % config.path)
else:
istools.mkdir(config.path, config.uid, config.gid, config.dmod)
arrow(u"%s directory created" % config.path)
except Exception as e:
raise ISError(u"Unable to create directory %s" % config.path, e)
arrowlevel(-1)
# create database
d = Database.create(config.dbpath)
istools.chrights(config.dbpath, uid=config.uid,
gid=config.gid, mode=config.fmod)
# load database
self.db = Database(config.dbpath)
# mark repo as not offline
self.config.offline = False
# create/update last file
self.update_last()
def update_last(self):
'''
Update last file to current time
'''
# check local repository
if not self.local:
raise ISError(u"Repository addition must be local")
try:
arrow("Updating last file")
last_path = os.path.join(self.config.path, self.config.lastname)
open(last_path, "w").write("%s\n" % int(time.time()))
istools.chrights(last_path, self.config.uid, self.config.gid, self.config.fmod)
except Exception as e:
raise ISError(u"Update last file failed", e)
def last(self, name):
'''
Return last version of name in repo or -1 if not found
'''
r = self.db.ask("SELECT version FROM image WHERE name = ? ORDER BY version DESC LIMIT 1", (name,)).fetchone()
# no row => no way
if r is None:
return -1
# return last
return r[0]
def add(self, image, delete=False):
'''
Add a packaged image to repository
if delete is true, remove original files
'''
# check local repository
if not self.local:
raise ISError(u"Repository addition must be local")
        # cannot add an already existing image
if self.has(image.name, image.version):
raise ISError(u"Image already in database, delete first!")
# adding file to repository
arrow("Copying images and payload")
for obj in [ image ] + image.payload.values():
dest = os.path.join(self.config.path, obj.md5)
basesrc = os.path.basename(obj.path)
if os.path.exists(dest):
arrow(u"Skipping %s: already exists" % basesrc, 1)
else:
arrow(u"Adding %s (%s)" % (basesrc, obj.md5), 1)
dfo = open(dest, "wb")
sfo = PipeFile(obj.path, "r", progressbar=True)
sfo.consume(dfo)
sfo.close()
dfo.close()
istools.chrights(dest, self.config.uid,
self.config.gid, self.config.fmod)
        # copy is done, create an image object inside the repo
r_image = PackageImage(os.path.join(self.config.path, image.md5),
md5name=True)
# checking must be done with original md5
r_image.md5 = image.md5
# checking image and payload after copy
r_image.check("Check image and payload")
# add description to db
arrow("Adding metadata")
self.db.begin()
# insert image information
arrow("Image", 1)
self.db.ask("INSERT INTO image values (?,?,?,?,?,?,?)",
(image.md5,
image.name,
image.version,
image.date,
image.author,
image.description,
image.size,
))
        # insert payload information
arrow("Payloads", 1)
for name, obj in image.payload.items():
self.db.ask("INSERT INTO payload values (?,?,?,?,?)",
(obj.md5,
image.md5,
name,
obj.isdir,
obj.size,
))
# on commit
self.db.commit()
# update last file
self.update_last()
        # removing original files
if delete:
arrow("Removing original files")
for obj in [ image ] + image.payload.values():
arrow(os.path.basename(obj.path), 1)
os.unlink(obj.path)
def getallmd5(self):
'''
Get list of all md5 in DB
'''
res = self.db.ask("SELECT md5 FROM image UNION SELECT md5 FROM payload").fetchall()
return [ md5[0] for md5 in res ]
def check(self):
'''
Check repository for unreferenced and missing files
'''
# Check if the repo is local
if not self.local:
raise ISError(u"Repository must be local")
local_files = set(os.listdir(self.config.path))
local_files.remove(self.config.dbname)
local_files.remove(self.config.lastname)
db_files = set(self.getallmd5())
# check missing files
arrow("Checking missing files")
missing_files = db_files - local_files
if len(missing_files) > 0:
out(os.linesep.join(missing_files))
# check unreferenced files
arrow("Checking unreferenced files")
unref_files = local_files - db_files
if len(unref_files) > 0:
out(os.linesep.join(unref_files))
# check corruption of local files
arrow("Checking corrupted files")
for f in local_files:
fo = PipeFile(os.path.join(self.config.path, f))
fo.consume()
fo.close()
if fo.md5 != f:
out(f)
def clean(self, force=False):
'''
Clean the repository's content
'''
# Check if the repo is local
if not self.local:
raise ISError(u"Repository must be local")
allmd5 = set(self.getallmd5())
repofiles = set(os.listdir(self.config.path)) - set([self.config.dbname, self.config.lastname])
dirtyfiles = repofiles - allmd5
if len(dirtyfiles) > 0:
# print dirty files
arrow("Dirty files:")
for f in dirtyfiles:
arrow(f, 1)
# ask confirmation
if not force and not confirm("Remove dirty files? (yes) "):
raise ISError(u"Aborted!")
# start cleaning
arrow("Cleaning")
for f in dirtyfiles:
p = os.path.join(self.config.path, f)
arrow(u"Removing %s" % p, 1)
try:
if os.path.isdir(p):
os.rmdir(p)
else:
os.unlink(p)
except:
warn(u"Removing %s failed" % p)
else:
arrow("Nothing to clean")
def delete(self, name, version, payloads=True):
'''
Delete an image from repository
'''
# check local repository
if not self.local:
raise ISError(u"Repository deletion must be local")
# get md5 of files related to the image (an exception is raised if it does not exist)
md5s = self.getmd5(name, version)
# clean the db (must be done before removing files)
arrow("Cleaning database")
arrow("Remove payloads from database", 1)
self.db.begin()
for md5 in md5s[1:]:
self.db.ask("DELETE FROM payload WHERE md5 = ? AND image_md5 = ?",
(md5, md5s[0])).fetchone()
arrow("Remove image from database", 1)
self.db.ask("DELETE FROM image WHERE md5 = ?",
(md5s[0],)).fetchone()
self.db.commit()
# Removing files
arrow("Removing files from pool")
# if asked don't remove payloads
if not payloads:
md5s = [ md5s[0] ]
arrowlevel(1)
for md5 in md5s:
self._remove_file(md5)
arrowlevel(-1)
# update last file
self.update_last()
def images(self):
'''
Return a list of dicts with information on images
'''
db_images = self.db.ask("SELECT md5, name, version, date,\
author, description, size FROM image ORDER BY name, version").fetchall()
images = []
field = ("md5", "name", "version", "date", "author", "description", "size")
for info in db_images:
d = dict(zip(field, info))
d["repo"] = self.config.name
d["url"] = os.path.join(self.config.path, d["md5"])
images.append(d)
return images
def payloads(self):
'''
Return a dict of information on payloads
'''
db_payloads = self.db.ask("SELECT payload.md5,payload.size,payload.isdir,image.name,image.version,payload.name FROM payload inner join image on payload.image_md5 = image.md5").fetchall()
res = {}
for payload in db_payloads:
md5 = payload[0]
# create entry if not exists
if md5 not in res:
res[md5] = {"size": payload[1], "isdir": payload[2], "images": {}}
# add image to list
imgpath = u"%s/%s:%s" % (self.config.name, payload[3], payload[4])
res[md5]["images"][imgpath] = {"repo": self.config.name,
"imgname": payload[3],
"imgver": payload[4],
"payname": payload[5]}
return res
def search(self, pattern):
'''
Search pattern in a repository
'''
images = self.db.ask("SELECT name, version, author, description\
FROM image\
WHERE name LIKE ? OR\
description LIKE ? OR\
author LIKE ?",
tuple( [u"%%%s%%" % pattern ] * 3)
).fetchall()
for name, version, author, description in images:
arrow(u"%s v%s" % (name, version), 1)
out(u" #yellow#Author:#reset# %s" % author)
out(u" #yellow#Description:#reset# %s" % description)
def _remove_file(self, filename):
'''
Remove a file from the pool, unless it is still referenced by the db
'''
# check existence in image and payload tables
have = False
for table in ("image", "payload"):
have = have or self.db.ask(u"SELECT md5 FROM %s WHERE md5 = ? LIMIT 1" % table,
(filename,)).fetchone() is not None
# if no reference, delete!
if not have:
arrow(u"%s, deleted" % filename)
os.unlink(os.path.join(self.config.path, filename))
else:
arrow(u"%s, skipped" % filename)
def has(self, name, version):
'''
Return whether an image with this name and version exists
'''
return self.db.ask("SELECT name,version FROM image WHERE name = ? AND version = ? LIMIT 1", (name,version)).fetchone() is not None
def get(self, name, version=None):
'''
Return an image from a name and version
'''
# if no version given, take the last one
if version is None:
version = self.last(name)
if version < 0:
raise ISError(u"Unable to find image %s in %s" % (name,
self.config.name))
# get file md5 from db
r = self.db.ask("select md5 from image where name = ? and version = ? limit 1",
(name, version)).fetchone()
if r is None:
raise ISError(u"Unable to find image %s v%s in %s" % (name, version,
self.config.name))
path = os.path.join(self.config.path, r[0])
# getting the file
arrow(u"Loading image %s v%s from repository %s" % (name,
version,
self.config.name))
memfile = cStringIO.StringIO()
try:
fo = PipeFile(path, "r")
fo.consume(memfile)
fo.close()
except Exception as e:
raise ISError(u"Loading image %s v%s failed" % (name, version), e)
memfile.seek(0)
pkg = PackageImage(path, fileobj=memfile, md5name=True)
if pkg.md5 != r[0]:
raise ISError(u"Image MD5 verification failure")
return pkg
def getmd5(self, name, version):
'''
Return the image md5 and payload md5s from a name and version. Order matters:
the image md5 always comes first
'''
# get file md5 from db
a = self.db.ask("SELECT md5 FROM image WHERE name = ? AND version = ? LIMIT 1",
(name,version)).fetchone()
if a is None:
raise ISError(u"No such image %s version %s" % (name, version))
b = self.db.ask("SELECT md5 FROM payload WHERE image_md5 = ?",
(a[0],)).fetchall()
return [ a[0] ] + [ x[0] for x in b ]
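# A sketch of getmd5's return shape (image name and version below are
# hypothetical): for an image with two payloads,
#   repo.getmd5("debian-squeeze", "3")
# returns [image_md5, payload_md5_1, payload_md5_2], image md5 first.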
class RepositoryManager(object):
'''
Manage multiple repositories
This class implements a cache and a manager for multiple repositories
Default repository timeout is 3 seconds
'''
def __init__(self, cache_path=None, timeout=None, filter=None, search=None):
self.repos = []
self.tempfiles = []
self.filter = [] if filter is None else filter
self.search = [] if search is None else search
self.timeout = timeout or 3
debug(u"Repository timeout set to %ds" % self.timeout)
if cache_path is None:
self.cache_path = None
debug("No repository cache")
else:
if not istools.isfile(cache_path):
raise NotImplementedError("Repository cache must be local")
self.cache_path = os.path.abspath(cache_path)
# create the cache directory if it does not exist
if not os.path.exists(self.cache_path):
os.mkdir(self.cache_path)
# ensure the cache directory is writable and traversable
if not os.access(self.cache_path, os.W_OK | os.X_OK):
raise ISError(u"%s is not writable or executable" % self.cache_path)
debug(u"Repository cache is in %s" % self.cache_path)
def __del__(self):
# delete temporary files (used by db)
for f in self.tempfiles:
try:
debug(u"Removing temporary db file %s" % f)
os.unlink(f)
except OSError:
pass
def __len__(self):
'''
Return the number of registered repositories
'''
return len(self.repos)
def __getitem__(self, key):
'''
Return a repository by its index or its name
'''
if isinstance(key, int):
return self.repos[key]
elif isinstance(key, basestring):
for repo in self.repos:
if repo.config.name == key:
return repo
raise IndexError(u"No repository named: %s" % key)
else:
raise TypeError(u"Invalid type %s for %s" % (type(key), key))
def __contains__(self, key):
'''
Check if a key is a repository name
'''
for r in self.repos:
if r.config.name == key:
return True
return False
def register(self, config, temp=False, nosync=False, offline=False):
'''
Register a repository from its config
temp: repository is stored in a temporary location
nosync: register repository as online, but no sync is done before
offline: repository is marked offline
'''
# check filter on name
if len(self.filter) > 0:
if config.name not in self.filter:
debug(u"Filtering repository %s" % config.name)
return
# repository is offline
if config.offline or offline:
debug(u"Registering offline repository %s (%s)" % (config.path, config.name))
# force offline when the offline argument is set
config.offline = True
self.repos.append(Repository(config))
# if path is local, no need to create a cache
elif istools.isfile(config.path):
debug(u"Registering direct repository %s (%s)" % (config.path, config.name))
self.repos.append(Repository(config))
# path is remote, we need to create a cache
else:
debug(u"Registering cached repository %s (%s)" % (config.path, config.name))
self.repos.append(self._cachify(config, temp, nosync))
def _cachify(self, config, temp=False, nosync=False):
'''
Return a config of a cached repository from an original config
:param config: repository configuration
:param temp: repository db should be stored in a temporary location
:param nosync: if a cache exists, don't try to update it
'''
# if cache is disabled, force a temporary db
if self.cache_path is None:
temp = True
try:
original_dbpath = config.dbpath
if temp and nosync:
raise IOError("sync is disabled")
elif temp:
# this is a temporary cached repository
tempfd, config.dbpath = tempfile.mkstemp()
os.close(tempfd)
self.tempfiles.append(config.dbpath)
else:
config.dbpath = os.path.join(self.cache_path, config.name)
if not nosync:
# Open remote database
rdb = PipeFile(original_dbpath, timeout=self.timeout)
# get remote last modification
if rdb.mtime is None:
# no modification time available, fall back to the last file
try:
rlast = int(PipeFile(config.lastpath, mode='r',
timeout=self.timeout).read().strip())
except IOError:
rlast = -1
else:
rlast = rdb.mtime
# get local last value
if os.path.exists(config.dbpath):
llast = int(os.stat(config.dbpath).st_mtime)
else:
llast = -2
# if repo is out of date, download it
if rlast != llast:
try:
arrow(u"Downloading %s" % original_dbpath)
rdb.progressbar = True
ldb = open(config.dbpath, "wb")
rdb.consume(ldb)
ldb.close()
rdb.close()
istools.chrights(config.dbpath,
uid=config.uid,
gid=config.gid,
mode=config.fmod,
mtime=rlast)
except:
if os.path.exists(config.dbpath):
os.unlink(config.dbpath)
raise
except IOError as e:
# if something goes wrong during caching, mark the repo as offline
debug(u"Unable to cache repository %s: %s" % (config.name, e))
config.offline = True
return Repository(config)
@property
def names(self):
'''
Return list of repository names
'''
return [ r.config.name for r in self.repos ]
@property
def onlines(self):
'''
Return list of online repository names
'''
return [ r.config.name for r in self.repos if not r.config.offline ]
@property
def offlines(self):
'''
Return list of offline repository names
'''
return [ r.config.name for r in self.repos if r.config.offline ]
def select_images(self, patterns):
'''
Return a list of available images
'''
if len(self.onlines) == 0:
raise ISError(u"No online repository")
ans = {}
for pattern in patterns:
path, image, version = Repository.split_image_path(pattern)
# no image name, skip it
if image is None:
warn(u"No image name in pattern %s, skipped" % pattern)
continue
# building image list
images = {}
for reponame in self.onlines:
for img in self[reponame].images():
imgname = u"%s/%s:%s" % (reponame, img["name"], img["version"])
images[imgname] = img
# No path means only in searchable repositories
if path is None:
for k, v in images.items():
if v["repo"] not in self.search:
del images[k]
path = "*"
# No version means last version
if version is None:
version = "*"
for repo in set((images[i]["repo"] for i in images)):
for img in set((images[i]["name"] for i in images if images[i]["repo"] == repo)):
versions = [ images[i]['version']
for i in images if images[i]["repo"] == repo and images[i]["name"] == img ]
f = lambda x,y: x if istools.compare_versions(x, y) > 0 else y
last = reduce(f, versions)
versions.remove(last)
for rmv in versions:
del images[u"%s/%s:%s" % (repo, img, rmv)]
# filter with pattern on path
filter_pattern = u"%s/%s:%s" % (path, image, version)
for k in images.keys():
if not fnmatch.fnmatch(k, filter_pattern):
del images[k]
ans.update(images)
return ans
def search_image(self, pattern):
'''
Search pattern across all registered repositories
'''
for repo in self.onlines:
arrow(self[repo].config.name)
self[repo].search(pattern)
def show_images(self, patterns, o_json=False, o_long=False, o_md5=False,
o_date=False, o_author=False, o_size=False,
o_url=False, o_description=False):
'''
Show images inside manager
'''
# get images list
images = self.select_images(patterns)
# display result
if o_json:
s = json.dumps(images)
else:
l = []
for imgp in sorted(images.keys()):
img = images[imgp]
l.append(u"%s#R#/#l##b#%s#R#:#p#%s#R#" % (
img["repo"], img["name"], img["version"]))
if o_md5 or o_long:
l[-1] = l[-1] + u" (#y#%s#R#)" % img["md5"]
if o_date or o_long:
l.append(u" #l#date:#R# %s" % istools.time_rfc2822(img["date"]))
if o_author or o_long:
l.append(u" #l#author:#R# %s" % img["author"])
if o_size or o_long:
l.append(u" #l#size:#R# %s" % istools.human_size(img["size"]))
if o_url or o_long:
l.append(u" #l#url:#R# %s" % img["url"])
if o_description or o_long:
l.append(u" #l#description:#R# %s" % img["description"])
s = os.linesep.join(l)
if len(s) > 0:
out(s)
def select_payloads(self, patterns):
'''
Return a list of available payloads
'''
if len(self.onlines) == 0:
raise ISError(u"No online repository")
# building payload list
paylist = {}
for reponame in self.onlines:
for md5, info in self[reponame].payloads().items():
if md5 not in paylist:
paylist[md5] = info
else:
paylist[md5]["images"].update(info["images"])
# select payloads whose md5 starts with one of the patterns
ans = {}
for pattern in patterns:
for md5 in paylist.keys():
if md5.startswith(pattern):
ans[md5] = paylist[md5]
return ans
def show_payloads(self, patterns, o_images=False, o_json=False):
'''
Show payloads inside manager
'''
# get payload list
payloads = self.select_payloads(patterns)
# display result
if o_json:
s = json.dumps(payloads)
else:
l = []
for payname in sorted(payloads.keys()):
pay = payloads[payname]
l.append(u"#l##y#%s#R#" % payname)
l.append(u" size: %s" % istools.human_size(pay["size"]))
l.append(u" directory: %s" % bool(pay["isdir"]))
l.append(u" image count: %d" % len(pay["images"]))
l.append(u" names: %s" % ", ".join(set((v["payname"] for v in pay["images"].values()))))
if o_images:
l.append(u" images:")
for path, obj in pay["images"].items():
l.append(u" %s#R#/#l##b#%s#R#:#p#%s#R# (%s)" % (
obj["repo"], obj["imgname"], obj["imgver"], obj["payname"]))
s = os.linesep.join(l)
if len(s) > 0:
out(s)
def select_repositories(self, patterns):
'''
Return a list of repository
'''
ans = set()
for pattern in patterns:
ans |= set(fnmatch.filter(self.names, pattern))
return sorted(ans)
def purge_repositories(self, patterns):
'''
Remove local cached repository files
'''
for reponame in self.select_repositories(patterns):
arrow(u"Purging cache of repository %s" % reponame)
db = os.path.join(self.cache_path, reponame)
if os.path.lexists(db):
try:
os.unlink(db)
arrow("done", 1)
except:
arrow("failed", 1)
else:
arrow("nothing to do", 1)
def show_repositories(self, patterns, local=None, online=None,
o_url=False, o_state=False, o_json=False):
'''
Show repository inside manager
if :param online: is true, list only online repositories
if :param online: is false, list only offline repositories
if :param online: is None, list both online and offline repositories.
if :param local: is true, list only local repositories
if :param local: is false, list only remote repositories
if :param local: is None, list both local and remote repositories.
'''
# build repositories dict
repos = {}
for reponame in self.select_repositories(patterns):
repo = self[reponame]
if repo.config.offline and online is True:
continue
if not repo.config.offline and online is False:
continue
if repo.local and local is False:
continue
if not repo.local and local is True:
continue
repos[reponame] = dict(repo.config.items())
repos[reponame]["local"] = repo.local
# display result
if o_json:
s = json.dumps(repos)
else:
l = []
for name, repo in repos.items():
ln = ""
so = "#l##r#Off#R# " if repo["offline"] else "#l##g#On#R# "
sl = "#l##y#Local#R# " if repo["local"] else "#l##c#Remote#R# "
rc = "#l##r#" if repo["offline"] else "#l##g#"
if o_state:
ln += u"%s%s " % (so, sl)
rc = "#l##b#"
ln += u"%s%s#R#"% (rc, name)
if o_url:
ln += u" (%s)" % repo["path"]
l.append(ln)
s = os.linesep.join(l)
out(s)
class RepositoryConfig(object):
'''
Repository configuration container
'''
def __init__(self, name, **kwargs):
# set default value for arguments
self._valid_param = ("name", "path", "dbpath", "lastpath",
"uid", "gid", "fmod", "dmod", "offline")
self.name = Repository.check_repository_name(name)
self.path = ""
self._offline = False
self._dbpath = None
self.dbname = "db"
self._lastpath = None
self.lastname = "last"
self._uid = os.getuid()
self._gid = os.getgid()
umask = os.umask(0)
os.umask(umask)
self._fmod = 0666 & ~umask
self._dmod = 0777 & ~umask
self.update(**kwargs)
def __str__(self):
l = []
for k, v in self.items():
l.append(u"%s: %s" % (k, v))
return os.linesep.join(l)
def __eq__(self, other):
return vars(self) == vars(other)
def __ne__(self, other):
return not (self == other)
def __contains__(self, key):
return key in self.__dict__
def __getitem__(self, key):
if key not in self._valid_param:
raise IndexError(key)
return getattr(self, key)
def __iter__(self):
for p in self._valid_param:
yield p
def items(self):
for p in self:
yield p, self[p]
@property
def lastpath(self):
'''
Return the last file complete path
'''
if self._lastpath is None:
return os.path.join(self.path, self.lastname)
return self._lastpath
@lastpath.setter
def lastpath(self, value):
'''
Set last path
'''
self._lastpath = value
@property
def dbpath(self):
'''
Return the db complete path
'''
if self._dbpath is None:
return os.path.join(self.path, self.dbname)
return self._dbpath
@dbpath.setter
def dbpath(self, value):
'''
Set db path
'''
# dbpath must be local, a sqlite3 requirement
if not istools.isfile(value):
raise ValueError("Database path must be local")
self._dbpath = os.path.abspath(value)
@property
def uid(self):
'''
Return owner of repository
'''
return self._uid
@uid.setter
def uid(self, value):
'''
Define user name owning repository
'''
if not value.isdigit():
self._uid = pwd.getpwnam(value).pw_uid
else:
self._uid = int(value)
@property
def gid(self):
'''
Return group of the repository
'''
return self._gid
@gid.setter
def gid(self, value):
'''
Define group owning repository
'''
if not value.isdigit():
self._gid = grp.getgrnam(value).gr_gid
else:
self._gid = int(value)
@property
def fmod(self):
'''
Return new file mode
'''
return self._fmod
@fmod.setter
def fmod(self, value):
'''
Define new file mode
'''
if value.isdigit():
self._fmod = int(value, 8)
else:
raise ValueError("File mode must be an integer")
@property
def dmod(self):
'''
Return new directory mode
'''
return self._dmod
@dmod.setter
def dmod(self, value):
'''
Define new directory mode
'''
if value.isdigit():
self._dmod = int(value, 8)
else:
raise ValueError("Directory mode must be an integer")
@property
def offline(self):
'''
Get the offline state of a repository
'''
return self._offline
@offline.setter
def offline(self, value):
if type(value) in (str, unicode):
value = value.lower() not in ("false", "no", "0")
elif type(value) is not bool:
value = bool(value)
self._offline = value
def update(self, *args, **kwargs):
'''
Update attribute with checking value
All attributes must already exist
'''
# set each provided parameter if the attribute already exists
for k in kwargs:
if hasattr(self, k):
try:
setattr(self, k, kwargs[k])
except Exception as e:
warn(u"Unable to set config parameter %s in repository %s: %s" %
(k, self.name, e))
else:
debug(u"No such repository parameter: %s" % k)
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/tarball.py 0000664 0000000 0000000 00000007214 12131501173 0026427 0 ustar 00root root 0000000 0000000 # -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see .
'''
Tarball wrapper
'''
import os
import sys
import time
import tarfile
import StringIO
import re
import fnmatch
from installsystems.exception import *
class Tarball(tarfile.TarFile):
def add_str(self, name, content, ftype, mode):
'''
Add a string in memory as a file in tarball
'''
ti = tarfile.TarInfo(name)
ti.type = ftype
ti.mode = mode
ti.mtime = int(time.time())
ti.uid = ti.gid = 0
ti.uname = ti.gname = "root"
# unicode content is encoded in UTF-8, as the changelog must be in UTF-8
if isinstance(content, unicode):
content = content.encode("UTF-8")
ti.size = len(content) if content is not None else 0
self.addfile(ti, StringIO.StringIO(content))
def get_str(self, name):
'''
Return a string from a filename in a tarball
'''
ti = self.getmember(name)
fd = self.extractfile(ti)
return fd.read() if fd is not None else ""
def get_utf8(self, name):
'''
Return a unicode string from a file encoded in UTF-8 inside the tarball
'''
try:
return unicode(self.get_str(name), "UTF-8")
except UnicodeDecodeError:
raise ISError(u"Invalid UTF-8 character in %s" % name)
def getnames(self, re_pattern=None, glob_pattern=None, dir=True):
names = super(Tarball, self).getnames()
# regexp matching
if re_pattern is not None:
names = filter(lambda x: re.match(re_pattern, x), names)
# globbing matching
if glob_pattern is not None:
names = fnmatch.filter(names, glob_pattern)
# directory filtering
if not dir:
names = filter(lambda x: not self.getmember(x).isdir(), names)
return names
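# Illustrative behaviour (hypothetical member names): with members
# ["a.py", "b.txt", "sub/"], getnames(glob_pattern="*.py") keeps only
# "a.py", while getnames(dir=False) drops the directory entry "sub/".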
def size(self):
'''
Return real (uncompressed) size of the tarball
'''
total_sz = 0
for ti in self.getmembers():
total_sz += ti.size
return total_sz
def chown(self, tarinfo, targetpath):
'''
Override the chown method from tarfile, which performs overly strict
checks on uid/gid before chowning. Those checks break when a uid/gid
does not exist on the running system.
As a side effect, this override also allows tarballs created without
--numeric-owner to be extracted correctly.
This was reported upstream: http://bugs.python.org/issue12841
'''
if hasattr(os, "geteuid") and os.geteuid() == 0:
# We have to be root to do so.
try:
if tarinfo.issym() and hasattr(os, "lchown"):
os.lchown(targetpath, tarinfo.uid, tarinfo.gid)
else:
if sys.platform != "os2emx":
os.chown(targetpath, tarinfo.uid, tarinfo.gid)
except EnvironmentError as e:
raise ExtractError("could not change owner")
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/template.py 0000664 0000000 0000000 00000005666 12131501173 0026632 0 ustar 00root root 0000000 0000000 # -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see .
description = u"""[image]
name = %(name)s
version = %(version)s
description = %(description)s
author = %(author)s
is_min_version = %(is_min_version)s
"""
changelog = u"""[1]
- Initial version
"""
build = u"""# -*- python -*-
# -*- coding: utf-8 -*-
# global rebuild object allow you to force rebuild of payloads
# to force rebuild of the payload named rootfs, add it to the rebuild list
# rebuild list is empty by default
#rebuild += ["rootfs"]
# vim:set ts=2 sw=2 noet:
"""
parser = """# -*- python -*-
# -*- coding: utf-8 -*-
# global image object is a reference to current image
# global parser object is your installsystems subparser (argparse)
import os
import argparse
from installsystems.printer import arrow
class TargetAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
if not os.path.isdir(values):
raise Exception(u"Invalid target directory %s" % values)
namespace.target = values
parser.add_argument("-n", "--hostname", dest="hostname", type=str, required=True)
parser.add_argument("target", type=str, action=TargetAction,
help="target installation directory")
# vim:set ts=2 sw=2 noet:
"""
setup = u"""# -*- python -*-
# -*- coding: utf-8 -*-
# global image object is a reference to current image
# the namespace object is persistent; it can be used to store data across scripts
from installsystems.printer import arrow
arrow(u"hostname: %s" % namespace.hostname)
# uncomment to extract the payload named rootfs into the namespace.target directory
#image.payload["rootfs"].extract(namespace.target)
# vim:set ts=2 sw=2 noet:
"""
createdb = u"""
CREATE TABLE image (md5 TEXT NOT NULL PRIMARY KEY,
name TEXT NOT NULL,
version INTEGER NOT NULL,
date INTEGER NOT NULL,
author TEXT,
description TEXT,
size INTEGER NOT NULL,
UNIQUE(name, version));
CREATE TABLE payload (md5 TEXT NOT NULL,
image_md5 TEXT NOT NULL REFERENCES image(md5),
name TEXT NOT NULL,
isdir INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY(md5, image_md5));
"""
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/installsystems/tools.py 0000664 0000000 0000000 00000056410 12131501173 0026150 0 ustar 00root root 0000000 0000000 # -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see .
'''
InstallSystems Generic Tools Library
'''
import hashlib
import imp
import jinja2
import locale
import math
import os
import re
import shutil
import socket
import time
import urllib2
from subprocess import call, check_call, CalledProcessError
import installsystems
from progressbar import Widget, ProgressBar, Percentage
from progressbar import FileTransferSpeed
from progressbar import Bar, BouncingBar, ETA, UnknownLength
from installsystems.tarball import Tarball
from installsystems.exception import *
from installsystems.printer import *
################################################################################
# Classes
################################################################################
class PipeFile(object):
'''
PipeFile is a file object with extended capabilities,
such as displaying a progress bar or computing file size and md5 on the fly
'''
class FileTransferSize(Widget):
'''
Custom progressbar widget
Widget for showing the transfer size (useful for file transfers)
'''
format = '%6.2f %s%s'
prefixes = ' kMGTPEZY'
__slots__ = ('unit', 'format')
def __init__(self, unit='B'):
self.unit = unit
def update(self, pbar):
'''
Updates the widget with the current SI prefixed speed
'''
if pbar.currval < 2e-6: # =~ 0
scaled = power = 0
else:
power = int(math.log(pbar.currval, 1000))
scaled = pbar.currval / 1000.**power
return self.format % (scaled, self.prefixes[power], self.unit)
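# Worked example of the scaling above: with currval = 1536000,
# power = int(math.log(1536000, 1000)) = 2 and scaled = 1.536, so the
# widget renders "  1.54 MB".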
def __init__(self, path=None, mode="r", fileobj=None, timeout=None,
progressbar=False):
self.open(path, mode, fileobj, timeout, progressbar)
def open(self, path=None, mode="r", fileobj=None, timeout=None, progressbar=False):
if path is None and fileobj is None:
raise AttributeError("You must provide a path or a fileobj to open")
if mode not in ("r", "w"):
raise AttributeError("Invalid open mode. Must be r or w")
self.timeout = timeout or socket.getdefaulttimeout()
self.mode = mode
self._md5 = hashlib.md5()
self.size = 0
self.mtime = None
self.consumed_size = 0
# we already have a fo, nothing to open
if fileobj is not None:
self.fo = fileobj
# seek to 0 and compute filesize if we have an fd
if hasattr(self.fo, "fileno"):
self.seek(0)
self.size = os.fstat(self.fo.fileno()).st_size
# we need to open the path
else:
ftype = pathtype(path)
if ftype == "file":
self._open_local(path)
elif ftype == "http":
self._open_http(path)
elif ftype == "ftp":
self._open_ftp(path)
elif ftype == "ssh":
self._open_ssh(path)
else:
raise IOError("URL type not supported")
# init progress bar
# a size of 0 means we cannot show a percentage, so use a bouncing bar
if self.size == 0:
widget = [ self.FileTransferSize(), " ",
BouncingBar(), " ", FileTransferSpeed() ]
maxval = UnknownLength
else:
widget = [ Percentage(), " ", Bar(), " ", FileTransferSpeed(), " ", ETA() ]
maxval = self.size
self._progressbar = ProgressBar(widgets=widget, maxval=maxval)
# enable displaying of progressbar
self.progressbar = progressbar
def _open_local(self, path):
'''
Open file on the local filesystem
'''
self.fo = open(path, self.mode)
sta = os.fstat(self.fo.fileno())
self.size = sta.st_size
self.mtime = sta.st_mtime
def _open_http(self, path):
'''
Open a file over HTTP
'''
try:
self.fo = urllib2.urlopen(path, timeout=self.timeout)
except Exception as e:
# FIXME: unable to open file
raise IOError(e)
# get file size
if "Content-Length" in self.fo.headers:
self.size = int(self.fo.headers["Content-Length"])
else:
self.size = 0
# get mtime
try:
self.mtime = int(time.mktime(time.strptime(self.fo.headers["Last-Modified"],
"%a, %d %b %Y %H:%M:%S %Z")))
except:
self.mtime = None
def _open_ftp(self, path):
'''
Open file via ftp
'''
try:
self.fo = urllib2.urlopen(path, timeout=self.timeout)
except Exception as e:
# FIXME: unable to open file
raise IOError(e)
# get file size
try:
self.size = int(self.fo.headers["content-length"])
except:
self.size = 0
def _open_ssh(self, path):
'''
Open current fo from an ssh connection
'''
# try to load paramiko
try:
import paramiko
except ImportError:
raise IOError("URL type not supported")
# parse url
(login, passwd, host, port, path) = re.match(
"ssh://(([^:]+)(:([^@]+))?@)?([^/:]+)(:(\d+))?(/.*)?", path).group(2, 4, 5, 7, 8)
if port is None: port = 22
if path is None: path = "/"
try:
# open ssh connection
# we must keep the connection inside the object, otherwise it gets closed
self._ssh = paramiko.SSHClient()
self._ssh.load_system_host_keys()
self._ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# there is a bug around connect with allow_agent when the agent cannot authenticate with a key
self._ssh.connect(host, port=port, username=login, password=passwd, allow_agent=True,
look_for_keys=True, timeout=self.timeout)
# switch to sftp mode
sftp = self._ssh.open_sftp()
# get the file infos
sta = sftp.stat(path)
self.size = sta.st_size
self.mtime = sta.st_mtime
# open the file
self.fo = sftp.open(path, self.mode)
# this is needed to have correct file transfer speed
self.fo.set_pipelined(True)
except Exception as e:
# FIXME: unable to open file
raise IOError(e)
def close(self):
if self.progressbar:
self._progressbar.finish()
debug(u"MD5: %s" % self.md5)
debug(u"Size: %s" % self.consumed_size)
self.fo.close()
def read(self, size=None):
if self.mode == "w":
raise IOError("Unable to read in w mode")
buf = self.fo.read(size)
length = len(buf)
self._md5.update(buf)
self.consumed_size += length
if self.progressbar and length > 0:
self._progressbar.update(self.consumed_size)
return buf
def flush(self):
if hasattr(self.fo, "flush"):
return self.fo.flush()
def write(self, buf):
if self.mode == "r":
raise IOError("Unable to write in r mode")
self.fo.write(buf)
length = len(buf)
self._md5.update(buf)
self.consumed_size += length
if self.progressbar and length > 0:
self._progressbar.update(self.consumed_size)
return None
def consume(self, fo=None):
'''
if PipeFile is in read mode:
Read all data from PipeFile and write it to fo
if fo is None, data are discarded. This is useful to obtain md5 and size
if PipeFile is in write mode:
Read all data from fo and write it to PipeFile
'''
if self.mode == "w":
if fo is None:
raise TypeError("Unable to consume NoneType")
while True:
buf = fo.read(1048576) # 1MiB
if len(buf) == 0:
break
self.write(buf)
else:
while True:
buf = self.read(1048576) # 1MiB
if len(buf) == 0:
break
if fo is not None:
fo.write(buf)
@property
def progressbar(self):
'''
Return whether the progress bar has been started
'''
return hasattr(self, "_progressbar_started")
@progressbar.setter
def progressbar(self, val):
'''
Set this property to True to enable the progress bar
'''
if installsystems.verbosity == 0:
return
if val and not hasattr(self, "_progressbar_started"):
self._progressbar_started = True
self._progressbar.start()
@property
def md5(self):
'''
Return the md5 of the data read/written so far
'''
return self._md5.hexdigest()
@property
def read_size(self):
'''
Return the current read size
'''
return self.consumed_size
@property
def write_size(self):
'''
Return the current written size
'''
return self.consumed_size
################################################################################
# Functions
################################################################################
def smd5sum(buf):
'''
Compute md5 of a string
'''
if isinstance(buf, unicode):
buf = buf.encode(locale.getpreferredencoding())
m = hashlib.md5()
m.update(buf)
return m.hexdigest()
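As a rough standalone sketch of what smd5sum computes (the `smd5sum_demo` name and the fixed UTF-8 encoding are illustrative; the function above uses the locale's preferred encoding):

```python
import hashlib

def smd5sum_demo(buf, encoding="utf-8"):
    # hex md5 of a string; text input is encoded first, as smd5sum does
    if not isinstance(buf, bytes):
        buf = buf.encode(encoding)
    return hashlib.md5(buf).hexdigest()

print(smd5sum_demo("hello"))  # 5d41402abc4b2a76b9719d911017c592
```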
def mkdir(path, uid=None, gid=None, mode=None):
'''
Create a directory and set rights
'''
os.makedirs(path)
chrights(path, uid, gid, mode)
def chrights(path, uid=None, gid=None, mode=None, mtime=None):
'''
Set rights on a file
'''
if uid is not None:
os.chown(path, uid, -1)
if gid is not None:
os.chown(path, -1, gid)
if mode is not None:
os.chmod(path, mode)
if mtime is not None:
os.utime(path, (mtime, mtime))
def pathtype(path):
'''
Return path type. This is useful to know what kind of path is given
'''
if path.startswith("http://") or path.startswith("https://"):
return "http"
if path.startswith("ftp://") or path.startswith("ftps://"):
return "ftp"
elif path.startswith("ssh://"):
return "ssh"
else:
return "file"
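A sketch of the scheme-based dispatch above (`pathtype_demo` is an illustrative stand-in):

```python
def pathtype_demo(path):
    # classify a path by its URL scheme, defaulting to a local file
    if path.startswith(("http://", "https://")):
        return "http"
    if path.startswith(("ftp://", "ftps://")):
        return "ftp"
    if path.startswith("ssh://"):
        return "ssh"
    return "file"

print(pathtype_demo("ssh://server/repo"))  # ssh
print(pathtype_demo("/var/lib/is/image"))  # file
```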
def pathsearch(name, path=None):
'''
Search PATH for a binary
'''
path = path or os.environ["PATH"]
for d in path.split(os.pathsep):
if os.path.exists(os.path.join(d, name)):
return os.path.join(os.path.abspath(d), name)
return None
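The PATH lookup can be exercised in isolation; `pathsearch_demo` below is only a sketch mirroring pathsearch:

```python
import os

def pathsearch_demo(name, path=None):
    # walk PATH-style entries and return the first absolute match
    path = path or os.environ.get("PATH", "")
    for d in path.split(os.pathsep):
        if os.path.exists(os.path.join(d, name)):
            return os.path.join(os.path.abspath(d), name)
    return None

# typical usage: locate a binary somewhere in $PATH, e.g.
# pathsearch_demo("gzip")
```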
def isfile(path):
'''
Return True if path is of type file
'''
return pathtype(path) == "file"
def abspath(path):
'''
Format a path to be absolute
'''
ptype = pathtype(path)
if ptype in ("http", "ftp", "ssh"):
return path
elif ptype == "file":
if path.startswith("file://"):
path = path[len("file://"):]
return os.path.abspath(path)
else:
return None
def getsize(path):
'''
Get size of a path. Recurse if directory
'''
total_sz = os.path.getsize(path)
if os.path.isdir(path):
for root, dirs, files in os.walk(path):
for filename in dirs + files:
filepath = os.path.join(root, filename)
filestat = os.lstat(filepath)
if stat.S_ISDIR(filestat.st_mode) or stat.S_ISREG(filestat.st_mode):
total_sz += filestat.st_size
return total_sz
def human_size(num, unit='B'):
'''
Return human readable size
'''
prefixes = ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi')
# guard against num == 0, where math.log is undefined
power = int(math.log(num, 1024)) if num > 0 else 0
# max is YiB
if power >= len(prefixes):
power = len(prefixes) - 1
scaled = num / float(1024 ** power)
return u"%3.1f%s%s" % (scaled, prefixes[power], unit)
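For illustration, a standalone sketch of the scaling logic, with a guard for num == 0 where math.log is undefined (`human_size_demo` is a hypothetical name):

```python
import math

def human_size_demo(num, unit='B'):
    # scale a byte count down by powers of 1024 and pick the matching prefix
    prefixes = ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi')
    power = int(math.log(num, 1024)) if num > 0 else 0
    power = min(power, len(prefixes) - 1)  # cap at YiB
    return "%3.1f%s%s" % (num / float(1024 ** power), prefixes[power], unit)

print(human_size_demo(1536))      # 1.5KiB
print(human_size_demo(10485760))  # 10.0MiB
```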
def time_rfc2822(timestamp):
'''
Return an RFC 2822 formatted time string from a Unix timestamp
'''
return time.strftime("%a, %d %b %Y %H:%M:%S %z", time.gmtime(timestamp))
def guess_distro(path):
'''
Try to detect which distro is inside a directory
'''
if os.path.exists(os.path.join(path, "etc/debian_version")):
return "debian"
elif os.path.exists(os.path.join(path, "etc/arch-release")):
return "archlinux"
return None
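The marker-file detection above can be sketched as a small table lookup (`guess_distro_demo` is illustrative):

```python
import os

def guess_distro_demo(path):
    # detect the distro inside a directory tree by its release marker file
    markers = (("etc/debian_version", "debian"),
               ("etc/arch-release", "archlinux"))
    for marker, distro in markers:
        if os.path.exists(os.path.join(path, marker)):
            return distro
    return None
```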
def prepare_chroot(path, mount=True):
'''
Prepare a chroot environment by bind mounting /{proc,sys,dev,dev/pts,dev/shm}
and try to guess the destination OS to avoid launching daemons
'''
# try to mount /proc /sys /dev /dev/pts /dev/shm
if mount:
mps = ("proc", "sys", "dev", "dev/pts", "dev/shm")
arrow("Mounting filesystems")
for mp in mps:
origin = u"/%s" % mp
target = os.path.join(path, mp)
if os.path.ismount(target):
warn(u"%s is already a mountpoint, skipped" % target)
elif os.path.ismount(origin) and os.path.isdir(target):
arrow(u"%s -> %s" % (origin, target), 1)
try:
check_call(["mount", "--bind", origin, target], close_fds=True)
except CalledProcessError as e:
warn(u"Mount failed: %s.\n" % e)
arrow("Tricks")
exists = os.path.exists
join = os.path.join
# check path is a kind of linux FHS
if not exists(join(path, "etc")) or not exists(join(path, "usr")):
return
# trick resolv.conf
try:
resolv_path = join(path, "etc", "resolv.conf")
resolv_backup_path = join(path, "etc", "resolv.conf.isbackup")
resolv_trick_path = join(path, "etc", "resolv.conf.istrick")
if (exists("/etc/resolv.conf")
and not exists(resolv_backup_path)
and not exists(resolv_trick_path)):
arrow("resolv.conf", 1)
if exists(resolv_path):
os.rename(resolv_path, resolv_backup_path)
else:
open(resolv_trick_path, "wb")
shutil.copy("/etc/resolv.conf", resolv_path)
except Exception as e:
warn(u"resolv.conf trick failed: %s" % e)
# trick mtab
try:
mtab_path = join(path, "etc", "mtab")
mtab_backup_path = join(path, "etc", "mtab.isbackup")
mtab_trick_path = join(path, "etc", "mtab.istrick")
if not exists(mtab_backup_path) and not exists(mtab_trick_path):
arrow("mtab", 1)
if os.path.exists(mtab_path):
os.rename(mtab_path, mtab_backup_path)
os.symlink("/proc/self/mounts", mtab_path)
except Exception as e:
warn(u"mtab trick failed: %s" % e)
# try to guess the distro
distro = guess_distro(path)
# in case of debian disable policy
if distro == "debian":
arrow("Debian specific", 1)
# create a chroot header
try: open(join(path, "etc", "debian_chroot"), "w").write("CHROOT")
except: pass
# fake policy-rc.d; it must exit 101, the exit code that denies service starts
policy_path = join(path, "usr", "sbin", "policy-rc.d")
try: open(policy_path, "w").write("#!/bin/bash\nexit 101\n")
except: pass
# policy-rc.d needs to be executable
chrights(policy_path, mode=0755)
def unprepare_chroot(path, mount=True):
'''
Rollback preparation of a chroot environment inside a directory
'''
arrow("Untricks")
exists = os.path.exists
join = os.path.join
# check path is a kind of linux FHS
if exists(os.path.join(path, "etc")) and exists(os.path.join(path, "usr")):
# untrick mtab
mtab_path = join(path, "etc", "mtab")
mtab_backup_path = join(path, "etc", "mtab.isbackup")
mtab_trick_path = join(path, "etc", "mtab.istrick")
if exists(mtab_backup_path) or exists(mtab_trick_path):
arrow("mtab", 1)
# order matters!
if exists(mtab_trick_path):
try: os.unlink(mtab_path)
except OSError: pass
try:
os.unlink(mtab_trick_path)
except OSError:
warn(u"Unable to remove %s" % mtab_trick_path)
if exists(mtab_backup_path):
try: os.unlink(mtab_path)
except OSError: pass
try:
os.rename(mtab_backup_path, mtab_path)
except OSError:
warn(u"Unable to restore %s" % mtab_backup_path)
# untrick resolv.conf
resolv_path = join(path, "etc", "resolv.conf")
resolv_backup_path = join(path, "etc", "resolv.conf.isbackup")
resolv_trick_path = join(path, "etc", "resolv.conf.istrick")
if exists(resolv_backup_path) or exists(resolv_trick_path):
arrow("resolv.conf", 1)
# order matters!
if exists(resolv_trick_path):
try: os.unlink(resolv_path)
except OSError: pass
try:
os.unlink(resolv_trick_path)
except OSError:
warn(u"Unable to remove %s" % resolv_trick_path)
if exists(resolv_backup_path):
try: os.unlink(resolv_path)
except OSError: pass
try:
os.rename(resolv_backup_path, resolv_path)
except OSError:
warn(u"Unable to restore %s" % resolv_backup_path)
# try to guess the distro
distro = guess_distro(path)
# cleaning debian stuff
if distro == "debian":
arrow("Debian specific", 1)
for f in ("etc/debian_chroot", "usr/sbin/policy-rc.d"):
try: os.unlink(join(path, f))
except: pass
# unmounting
if mount:
mps = ("proc", "sys", "dev", "dev/pts", "dev/shm")
arrow("Unmounting filesystems")
for mp in reversed(mps):
target = join(path, mp)
if os.path.ismount(target):
arrow(target, 1)
call(["umount", target], close_fds=True)
def chroot(path, shell="/bin/bash", mount=True):
'''
Chroot inside a directory and call shell
if mount is true, mount /{proc,sys,dev,dev/pts,dev/shm} inside the chroot
'''
# prepare to chroot
prepare_chroot(path, mount)
# chrooting
arrow(u"Chrooting inside %s and running %s" % (path, shell))
call(["chroot", path, shell], close_fds=True)
# revert preparation of chroot
unprepare_chroot(path, mount)
def is_version(version):
'''
Check if version is valid
'''
if re.match("^(\d+)(?:([-~+]).*)?$", version) is None:
raise TypeError(u"Invalid version format: %s" % version)
def compare_versions(v1, v2):
'''
Compare version v1 with version v2:
return > 0 if v1 > v2
return < 0 if v1 < v2
return   0 if v1 == v2
'''
def get_ver(version):
'''Return float version'''
if type(version) is int or type(version) is float:
return float(version)
elif isinstance(version, basestring):
iv = re.match("^(\d+)(?:([-~+]).*)?$", version)
if iv is None:
raise TypeError(u"Invalid version format: %s" % version)
rv = float(iv.group(1))
if iv.group(2) == "~":
rv -= 0.1
elif iv.group(2) is not None:
rv += 0.1
return rv
else:
raise TypeError(u"Invalid version format: %s" % version)
fv1 = get_ver(v1)
fv2 = get_ver(v2)
return fv1 - fv2
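A standalone sketch of the intended Debian-like semantics (a `~` pre-release suffix sorts before the bare version, any other suffix after it); `compare_versions_demo` is an illustrative name, not the library API:

```python
import re

def compare_versions_demo(v1, v2):
    # map each version to a float: base number, -0.1 for a "~" suffix,
    # +0.1 for any other suffix; then compare the floats
    def get_ver(version):
        m = re.match(r"^(\d+)(?:([-~+]).*)?$", str(version))
        if m is None:
            raise TypeError("Invalid version format: %s" % version)
        rv = float(m.group(1))
        if m.group(2) == "~":
            rv -= 0.1
        elif m.group(2) is not None:
            rv += 0.1
        return rv
    return get_ver(v1) - get_ver(v2)

assert compare_versions_demo("2~rc1", "2") < 0   # pre-release sorts first
assert compare_versions_demo("2+fix1", "2") > 0  # revision sorts after
assert compare_versions_demo("3", "2+fix1") > 0
```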
def get_compressor_path(name, compress=True, level=None):
'''
Return the best available compressor argv for a generic compressor name
e.g: bzip2 returns pbzip2 if available, plain bzip2 otherwise
'''
compressors = {"gzip": [["gzip", "--no-name", "--stdout"]],
"bzip2": [["pbzip2", "--stdout"],
["bzip2", "--compress", "--stdout"]],
"xz": [["xz", "--compress", "--stdout"]]}
decompressors = {"gzip": [["gzip", "--decompress", "--stdout"]],
"bzip2": [["pbzip2","--decompress", "--stdout"],
["bzip2", "--decompress", "--stdout"]],
"xz": [["xz", "--decompress", "--stdout"]]}
# no compress level for decompression
if not compress:
level = None
allcompressors = compressors if compress else decompressors
# check compressor exists
if name not in allcompressors.keys():
raise ISError(u"Invalid compressor name: %s" % name)
# get valid compressors
for compressor in allcompressors[name]:
path = pathsearch(compressor[0])
if path is None:
continue
if level is not None:
compressor.append("-%d" % level)
return compressor
raise ISError(u"No external compressor/decompressor found for %s" % name)
def render_templates(target, context, tpl_ext=".istpl", force=False, keep=False):
'''
Render templates according to tpl_ext
Apply template mode/uid/gid to the generated file
'''
for path in os.walk(target):
for filename in path[2]:
name, ext = os.path.splitext(filename)
if ext == tpl_ext:
tpl_path = os.path.join(path[0], filename)
file_path = os.path.join(path[0], name)
arrow(tpl_path)
if os.path.exists(file_path) and not force:
raise ISError(u"%s will be overwritten, cancelling template "
"generation (set force=True if you know "
"what you are doing)" % file_path)
try:
with open(tpl_path) as tpl_file:
template = jinja2.Template(tpl_file.read())
with open(file_path, "w") as rendered_file:
rendered_file.write(template.render(context))
except Exception as e:
raise ISError(u"Failed to render template", e)
st = os.stat(tpl_path)
os.chown(file_path, st.st_uid, st.st_gid)
os.chmod(file_path, st.st_mode)
if not keep:
os.unlink(tpl_path)
def string2module(name, string, filename):
'''
Create a python module from a string
'''
# create an empty module
module = imp.new_module(name)
# compile module code
try:
bytecode = compile(string, filename, "exec")
except Exception as e:
raise ISError(u"Unable to compile %s" % filename, e)
# Load module
try:
exec bytecode in module.__dict__
except Exception as e:
raise ISError(u"Unable to load %s" % filename, e)
return module
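A sketch of the same idea using exec() as a function so it also runs on Python 3 (`string2module_demo` and types.ModuleType are stand-ins for the Python 2 imp.new_module used above):

```python
import types

def string2module_demo(name, source, filename="<string>"):
    # compile source text and execute it inside a fresh module namespace
    module = types.ModuleType(name)
    bytecode = compile(source, filename, "exec")
    exec(bytecode, module.__dict__)
    return module

mod = string2module_demo("demo", "answer = 6 * 7\ndef double(x): return 2 * x")
print(mod.answer)      # 42
print(mod.double(21))  # 42
```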
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/samples/ 0000775 0000000 0000000 00000000000 12131501173 0022776 5 ustar 00root root 0000000 0000000 installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/samples/installsystems.conf 0000664 0000000 0000000 00000001363 12131501173 0026746 0 ustar 00root root 0000000 0000000 # installsystems main configuration
[installsystems]
# Set verbosity (0: quiet, 1: normal, 2: debug)
#verbosity = 2
# Set nice process value (see nice (1))
#nice = 0
# Set ionice class (see ionice (1))
#ionice_class = idle
# Set ionice class data level (see ionice (1))
#ionice_level = 0
# define a custom cache directory
#cache = /tmp/sex
# disable cache of remote repository
#no_cache = 1
# disable output coloring
#no_color = 1
# disable check of script during build
#no_check = 1
# global connection timeout
#timeout = 30
# search images inside repositories
#repo_search = stable testing
# filter repository list
#repo_filter = stable testing
# custom repository config file
#repo_config =
# repository loading timeout
#repo_timeout = 1
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/samples/repository.conf 0000664 0000000 0000000 00000000477 12131501173 0026074 0 ustar 00root root 0000000 0000000 # repository configuration
# local repository
#[local]
#path = /home/seblu/pub/is
#fmod = 644
#dmod = 755
#uid = seblu
#gid = sebgp
# localhost testing repository
#[http]
#path = http://127.0.0.1/is
# smartjog official installsystems repository
#[smartjog]
#path = http://installsystems.boot.wan/is
#offline = False
installsystems-7044e2062fed984cf91509b9d689493b5543a3f1/setup.py 0000664 0000000 0000000 00000003166 12131501173 0023052 0 ustar 00root root 0000000 0000000 # -*- python -*-
# -*- coding: utf-8 -*-
# This file is part of Installsystems.
#
# Installsystems is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Installsystems is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Installsystems. If not, see <http://www.gnu.org/licenses/>.
from setuptools import setup
import os
import sys
import installsystems
import subprocess
# Build manpage
subprocess.call(['rst2man', 'doc/is.1.rst', 'doc/is.1'])
# Read long description from README
ldesc = open(os.path.join(os.path.dirname(__file__), 'README')).read()
setup(
name=installsystems.canonical_name,
version=installsystems.version,
description='InstallSystems',
long_description=ldesc,
author='Sébastien Luttringer',
author_email='sebastien.luttringer@smartjog.com',
license='LGPL3',
packages=[ 'installsystems' ],
scripts=[ 'bin/is' ],
data_files=(
('/etc/installsystems/', ('samples/repository.conf',
'samples/installsystems.conf')),
('/etc/bash_completion.d/', ('completion/bash/is',)),
),
classifiers=[
'Operating System :: Unix',
'Programming Language :: Python',
],
)