Initial commit

Quentin Machu 2015-11-13 13:44:41 -05:00
commit d3fcb465a3
1047 changed files with 219060 additions and 0 deletions

7
.dockerignore Normal file

@@ -0,0 +1,7 @@
.*
*.md
DCO
LICENSE
NOTICE
docs
cloudconfig

71
CONTRIBUTING.md Executable file

@@ -0,0 +1,71 @@
# How to Contribute
CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via
GitHub pull requests. This document outlines some of the conventions on
development workflow, commit message formatting, contact points and other
resources to make it easier to get your contribution accepted.
# Certificate of Origin
By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution. See the [DCO](DCO) file for details.
# Email and Chat
The project currently uses the general CoreOS email list and IRC channel:
- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev)
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org
Please avoid emailing maintainers found in the MAINTAINERS file directly. They
are very busy and read the mailing lists.
## Getting Started
- Fork the repository on GitHub
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!
## Contribution Flow
This is a rough outline of what a contributor's workflow looks like:
- Create a topic branch from where you want to base your work (usually master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository.
Thanks for your contributions!
### Format of the Commit Message
We follow a rough convention for commit messages that is designed to answer two
questions: what changed and why. The subject line should feature the what and
the body of the commit should describe the why.
```
scripts: add the test-cluster command
this uses tmux to setup a test cluster that you can easily kill and
start for debugging.
Fixes #38
```
The format can be described more formally as follows:
```
<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>
```
The first line is the subject and should be no longer than 70 characters, the
second line is always blank, and other lines should be wrapped at 80 characters.
This allows the message to be easier to read on GitHub as well as in various
git tools.

36
DCO Executable file

@@ -0,0 +1,36 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

18
Dockerfile Normal file

@@ -0,0 +1,18 @@
FROM golang:1.5
MAINTAINER Quentin Machu <quentin.machu@coreos.com>
RUN apt-get update && apt-get install -y bzr rpm && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN mkdir /db
VOLUME /db
EXPOSE 6060 6061
ADD . /go/src/github.com/coreos/quay-sec/
WORKDIR /go/src/github.com/coreos/quay-sec/
ENV GO15VENDOREXPERIMENT 1
RUN go install -v
RUN go test $(go list ./... | grep -v /vendor/) # https://github.com/golang/go/issues/11659
ENTRYPOINT ["quay-sec"]

146
Godeps/Godeps.json generated Normal file

@@ -0,0 +1,146 @@
{
"ImportPath": "github.com/coreos/quay-sec",
"GoVersion": "go1.5.1",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "github.com/alecthomas/kingpin",
"Comment": "v2.1.1",
"Rev": "b5101a19548bb5ca25d38bb60f60697c0621a473"
},
{
"ImportPath": "github.com/alecthomas/template",
"Rev": "b867cc6ab45cece8143cfcc6fc9c77cf3f2c23c0"
},
{
"ImportPath": "github.com/alecthomas/units",
"Rev": "6b4e7dc5e3143b85ea77909c72caf89416fc2915"
},
{
"ImportPath": "github.com/badgerodon/peg",
"Rev": "9e5f7f4d07ca576562618c23e8abadda278b684f"
},
{
"ImportPath": "github.com/barakmich/glog",
"Rev": "fafcb6128a8a2e6360ff034091434d547397d54a"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.0-98-gafceb31",
"Rev": "afceb316b96ea97cbac6d23afbdf69543d80748a"
},
{
"ImportPath": "github.com/codegangsta/negroni",
"Comment": "v0.1-70-gc7477ad",
"Rev": "c7477ad8e330bef55bf1ebe300cf8aa67c492d1b"
},
{
"ImportPath": "github.com/coreos/go-systemd/journal",
"Comment": "v3-15-gcfa48f3",
"Rev": "cfa48f34d8dc4ff58f9b48725181a09f9092dc3c"
},
{
"ImportPath": "github.com/coreos/pkg/capnslog",
"Rev": "42a8c3b1a6f917bb8346ef738f32712a7ca0ede7"
},
{
"ImportPath": "github.com/coreos/pkg/timeutil",
"Rev": "42a8c3b1a6f917bb8346ef738f32712a7ca0ede7"
},
{
"ImportPath": "github.com/gogo/protobuf/proto",
"Rev": "58bbd41c1a2d1b7154f5d99a8d0d839b3093301a"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "deb4a5e3b15dea23f340a311eea995421845c356"
},
{
"ImportPath": "github.com/google/cayley",
"Rev": "cdf0154d1a34019651eb4f46ce666b31f4d8cae7"
},
{
"ImportPath": "github.com/julienschmidt/httprouter",
"Rev": "8c199fb6259ffc1af525cc3ad52ee60ba8359669"
},
{
"ImportPath": "github.com/lib/pq",
"Comment": "go1.0-cutoff-56-gdc50b6a",
"Rev": "dc50b6ad2d3ee836442cf3389009c7cd1e64bb43"
},
{
"ImportPath": "github.com/onsi/ginkgo",
"Comment": "v1.2.0-22-g39d2c24",
"Rev": "39d2c24f8a92c88f7e7f4d8ec6c80d3cc8f5ac65"
},
{
"ImportPath": "github.com/onsi/gomega",
"Comment": "v1.0-71-g2152b45",
"Rev": "2152b45fa28a361beba9aab0885972323a444e28"
},
{
"ImportPath": "github.com/pborman/uuid",
"Rev": "ca53cad383cad2479bbba7f7a1a05797ec1386e4"
},
{
"ImportPath": "github.com/peterh/liner",
"Rev": "1bb0d1c1a25ed393d8feb09bab039b2b1b1fbced"
},
{
"ImportPath": "github.com/robertkrimen/otto",
"Rev": "7597815bd01ab01ae085f610303aa550ca38189d"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.2-80-g8cec3a8",
"Rev": "8cec3a854e68dba10faabbe31c089abf4a3e57a6"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "244f5ac324cb97e1987ef901a0081a77bfd8e845"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Comment": "v1.0-17-g089c718",
"Rev": "089c7181b8c728499929ff09b62d3fdd8df8adff"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "315fcfb05d4d46d4354b313d146ef688dda272a9"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
"Rev": "156a073208e131d7d2e212cb749feae7c339e846"
},
{
"ImportPath": "github.com/tylerb/graceful",
"Comment": "v1.2.3",
"Rev": "48afeb21e2fcbcff0f30bd5ad6b97747b0fae38e"
},
{
"ImportPath": "golang.org/x/net/netutil",
"Rev": "7654728e381988afd88e58cabfd6363a5ea91810"
},
{
"ImportPath": "gopkg.in/alecthomas/kingpin.v2",
"Comment": "v2.0.10",
"Rev": "e1f37920c1d0ced4d1c92f9526a2a433183f02e9"
},
{
"ImportPath": "gopkg.in/mgo.v2",
"Comment": "r2015.01.24-35-g01ee097",
"Rev": "01ee097136da162d1dd3c9b44fbdf3abf4fd6552"
},
{
"ImportPath": "gopkg.in/tomb.v2",
"Rev": "14b3d72120e8d10ea6e6b7f87f7175734b1faab8"
},
{
"ImportPath": "gopkg.in/tylerb/graceful.v1",
"Comment": "v1.2.1",
"Rev": "ac9ebe4f1ee151ac1eeeaef32957085cba64d508"
}
]
}

5
Godeps/Readme generated Normal file

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

202
LICENSE Executable file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

5
NOTICE Executable file

@@ -0,0 +1,5 @@
CoreOS Project
Copyright 2015 CoreOS, Inc
This product includes software developed at CoreOS, Inc.
(http://www.coreos.com/).

83
README.md Normal file

@@ -0,0 +1,83 @@
Clair
=====
[![Docker Repository on Quay.io](https://quay.io/repository/coreos/clair/status "Docker Repository on Quay.io")](https://quay.io/repository/coreos/clair)
Clair is a container vulnerability analysis service. It provides the list of vulnerabilities that threaten each container and can send notifications whenever new vulnerabilities affecting existing containers are released.
We named the project « Clair », French for *clear*, *bright*, *transparent*, because we believe it enables users to gain clear insight into the security of their container infrastructure.
## Why should I use Clair?
Clair is a single-binary server that exposes a JSON HTTP API. It does not require any agent to sit on your containers, nor does it need any specific tweak to be made to them. It has been designed to perform massive analysis on the [Quay.io Container Registry](https://quay.io).
Whether you host a container registry, run a continuous-integration system, or build dozens to thousands of containers, you would benefit from Clair. More generally, if container security matters to you (and, honestly, it should), you should give it a shot.
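For a flavor of that API, below is a minimal sketch of a client written with Go's standard library. The endpoint paths and JSON fields mirror `api/router.go` and `api/logic/layers.go` in this commit; the host, port (6060 is the port the Dockerfile exposes) and layer identifiers are placeholder assumptions rather than values Clair requires.
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Assumed local instance; 6060 is the API port exposed by the Dockerfile.
	base := "http://localhost:6060/v1"

	// POST /layers asks the worker to analyze a layer. ID, Path and ParentID
	// correspond to POSTLayersParameters; the values here are placeholders.
	body, _ := json.Marshal(map[string]string{
		"ID":       "my-layer-id",
		"Path":     "/tmp/my-layer.tar",
		"ParentID": "my-parent-layer-id",
	})
	resp, err := http.Post(base+"/layers", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// GET /layers/:id/vulnerabilities lists the vulnerabilities affecting the
	// layer, optionally filtered by minimumPriority.
	resp, err = http.Get(base + "/layers/my-layer-id/vulnerabilities?minimumPriority=Low")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		Vulnerabilities []struct{ ID, Link, Priority, Description string }
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	for _, v := range result.Vulnerabilities {
		fmt.Println(v.ID, v.Priority, v.Link)
	}
}
```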
## How Clair Detects Vulnerabilities
Clair has been designed to analyze a container layer only once, without running the container. The analysis must extract all the data required to detect not only the known vulnerabilities that may affect a layer, but also any future ones.
Detecting vulnerabilities can be achieved by several techniques. One possibility is to compute hashes of the binaries present in a layer and compare them against a database. However, building such a database quickly becomes tricky given the number of different packages and library versions.
To detect vulnerabilities, Clair instead takes advantage of package managers, which quickly and comprehensively provide lists of installed binary and source packages. Package lists are extracted for each layer that composes your container image, and only the difference between a layer's package list and its parent's is stored. Not only is this method storage-efficient, it also lets us scan a layer that may be used in many images only once. Coupled with vulnerability databases such as Debian's Security Bug Tracker, Clair is able to tell which vulnerabilities threaten a container, and which layer and package introduced them.
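To make that concrete, here is a minimal, self-contained sketch, not Clair's actual worker code, of how the difference between a layer's package list and its parent's could be computed once both lists have been extracted as name-to-version maps:
```
package main

import "fmt"

// diffPackages returns the packages installed and removed by a layer relative
// to its parent, given the full package list (name -> version) of each.
// Only this diff needs to be stored for the layer.
func diffPackages(parent, layer map[string]string) (installed, removed map[string]string) {
	installed = make(map[string]string)
	removed = make(map[string]string)
	for name, version := range layer {
		if parentVersion, ok := parent[name]; !ok || parentVersion != version {
			installed[name] = version // new package, or a version upgrade
		}
	}
	for name, version := range parent {
		if layerVersion, ok := layer[name]; !ok || layerVersion != version {
			removed[name] = version // package deleted, or its old version replaced
		}
	}
	return installed, removed
}

func main() {
	parent := map[string]string{"openssl": "1.0.1f", "bash": "4.3"}
	layer := map[string]string{"openssl": "1.0.1g", "bash": "4.3", "curl": "7.35.0"}
	installed, removed := diffPackages(parent, layer)
	fmt.Println("installed:", installed) // openssl 1.0.1g (upgraded), curl 7.35.0 (new)
	fmt.Println("removed:", removed)     // openssl 1.0.1f (replaced by 1.0.1g)
}
```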
### Graph
Clair internally uses a graph, whose model is described in the [associated doc](docs/Model.md), to store and query data. Below is a non-exhaustive example graph that corresponds to the following *Dockerfile*.
```
1. MAINTAINER Quentin Machu <quentin.machu@coreos.com>
2. FROM ubuntu:trusty
3. RUN apt-get update && apt-get upgrade -y
4. EXPOSE 22
5. CMD ["/usr/sbin/sshd", "-D"]
```
![Example graph](docs/Model.png)
The above image shows five layers represented by the purple nodes, associated with their ids and parents. Because the second layer imports *Ubuntu Trusty* in the container, Clair can detect the operating system and some packages, in green (we only show one here for the sake of simplicity). The third layer upgrades packages, so the graph reflects that this layer removes the previous version and installs the new one. Finally, the graph knows about a vulnerability, drawn in red, which is fixed by a particular package. Note that two synthetic package versions exist (0 and ∞): they ensure database consistency during parallel modification. ∞ also allows us to define very easily that a vulnerability is not yet fixed; thus, it affects every package version.
Querying this particular graph will tell us that our image is not vulnerable at all because none of the successor versions of its only package fix any vulnerability. However, an image based on the second layer could be vulnerable.
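Stripped of the graph machinery, that query boils down to a set intersection. The hedged sketch below, which uses plain structs instead of graph nodes, shows the idea: a layer is affected by a vulnerability as soon as one of the successor versions of its packages appears among the versions that fix it (the synthetic ∞ version ensures a not-yet-fixed vulnerability always matches).
```
package main

import "fmt"

// pkgVersion identifies one version of one package; it stands in for the
// package-version nodes of the graph in this simplified sketch.
type pkgVersion struct {
	Name, Version string
}

// isAffected reports whether a layer is affected by a vulnerability, given the
// successor versions of every package the layer contains and the set of
// versions in which the vulnerability is fixed.
func isAffected(successors []pkgVersion, fixedIn map[pkgVersion]bool) bool {
	for _, s := range successors {
		if fixedIn[s] {
			// A later version of one of the layer's packages fixes the
			// vulnerability, so the version the layer actually ships is affected.
			return true
		}
	}
	return false
}

func main() {
	// The layer ships openssl 1.0.1f; its successors include 1.0.1g and the
	// synthetic ∞ version used for vulnerabilities that are not fixed yet.
	successors := []pkgVersion{{"openssl", "1.0.1g"}, {"openssl", "∞"}}
	fixedIn := map[pkgVersion]bool{{"openssl", "1.0.1g"}: true} // CVE fixed in 1.0.1g
	fmt.Println(isAffected(successors, fixedIn))                // true: the layer is vulnerable
}
```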
### Architecture
Clair is divided into six main modules (which represent Go packages):
- **api** defines how users interact with Clair and exposes a [documented HTTP API](docs/API.md).
- **worker** extracts useful information from layers and stores everything in the database.
- **updater** periodically updates Clair's vulnerability database from known vulnerability sources.
- **notifier** dispatches [notifications](docs/Notifications.md) about vulnerable containers when vulnerabilities are released or updated.
- **database** persists layer information and vulnerabilities in the [Cayley graph database](https://github.com/google/cayley).
- **health** summarizes the health checks of all of Clair's services.
Multiple backend databases are supported: a testing deployment would use in-memory storage, while a production deployment should use [Bolt](https://github.com/boltdb/bolt) (single-instance deployment) or PostgreSQL (distributed deployment, probably behind a load balancer). To learn more about how to run Clair, take a look at the [doc](docs/Run.md).
#### Detectors & Fetchers
Clair currently supports three operating systems and their package managers, which we believe are the most common ones: *Debian* (dpkg), *Ubuntu* (dpkg), *CentOS* (yum).
Supporting an operating system implies that we are able to extract the operating system's name and version from a layer, as well as the list of packages it contains. This is done inside the *worker/detectors* package, and extending it is straightforward.
All of this is useless if no vulnerabilities are known for any of these packages. The *updater/fetchers* package defines trusted sources of vulnerabilities and how to fetch and parse them. For now, Clair uses three databases, one for each supported operating system:
- [Ubuntu CVE Tracker](https://launchpad.net/ubuntu-cve-tracker)
- [Debian Security Bug Tracker](https://security-tracker.debian.org/tracker/)
- [Red Hat Security Data](https://www.redhat.com/security/data/metrics/)
Using these distro-specific sources gives us confidence that Clair can take into consideration *all* the different package implementations and backports without ever reporting anything possibly inaccurate.
# Coming Soon
- Improved performance.
- Extended detection system
- More package managers
- Generic features such as detecting presence/absence of files
- ...
- Expose more information about vulnerabilities
- Access vector
- Access complexity
- ...
# Related links
- Talk @ ContainerDays NYC 2015 [[Slides]](https://docs.google.com/presentation/d/1toUKgqLyy1b-pZlDgxONLduiLmt2yaLR0GliBB7b3L0/pub?start=false&loop=false&slide=id.p) [[Video]](https://www.youtube.com/watch?v=PA3oBAgjnkU)
- [Quay](https://quay.io): First container registry using Clair.

126
api/api.go Normal file

@@ -0,0 +1,126 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package api provides a RESTful HTTP API, enabling external apps to interact
// with quay-sec.
package api
import (
"io/ioutil"
"net"
"net/http"
"strconv"
"time"
"crypto/tls"
"crypto/x509"
"github.com/coreos/pkg/capnslog"
"github.com/coreos/quay-sec/utils"
"github.com/tylerb/graceful"
)
var log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "api")
// Config represents the configuration for the Main API.
type Config struct {
Port int
TimeOut time.Duration
CertFile, KeyFile, CAFile string
}
// RunMain launches the main API, which exposes every possible interaction
// with quay-sec.
func RunMain(conf *Config, st *utils.Stopper) {
log.Infof("starting API on port %d.", conf.Port)
defer func() {
log.Info("API stopped")
st.End()
}()
srv := &graceful.Server{
Timeout: 0, // Already handled by our TimeOut middleware
NoSignalHandling: true, // We want to use our own Stopper
Server: &http.Server{
Addr: ":" + strconv.Itoa(conf.Port),
TLSConfig: setupClientCert(conf.CAFile),
Handler: NewVersionRouter(conf.TimeOut),
},
}
listenAndServeWithStopper(srv, st, conf.CertFile, conf.KeyFile)
}
// RunHealth launches the Health API, which only exposes a method to fetch
// quay-sec's health without any security or authentication mechanism.
func RunHealth(port int, st *utils.Stopper) {
log.Infof("starting Health API on port %d.", port)
defer func() {
log.Info("Health API stopped")
st.End()
}()
srv := &graceful.Server{
Timeout: 10 * time.Second, // Interrupt health checks when stopping
NoSignalHandling: true, // We want to use our own Stopper
Server: &http.Server{
Addr: ":" + strconv.Itoa(port),
Handler: NewHealthRouter(),
},
}
listenAndServeWithStopper(srv, st, "", "")
}
// listenAndServeWithStopper wraps graceful.Server's
// ListenAndServe/ListenAndServeTLS and adds the ability to interrupt them with
// the provided utils.Stopper
func listenAndServeWithStopper(srv *graceful.Server, st *utils.Stopper, certFile, keyFile string) {
go func() {
<-st.Chan()
srv.Stop(0)
}()
var err error
if certFile != "" && keyFile != "" {
log.Info("API: TLS Enabled")
err = srv.ListenAndServeTLS(certFile, keyFile)
} else {
err = srv.ListenAndServe()
}
if opErr, ok := err.(*net.OpError); !ok || (ok && opErr.Op != "accept") {
log.Fatal(err)
}
}
// setupClientCert creates a tls.Config instance using the given CA file path
// (if provided) and calls log.Fatal if the file cannot be read.
func setupClientCert(caFile string) *tls.Config {
if len(caFile) > 0 {
log.Info("API: Client Certificate Authentification Enabled")
caCert, err := ioutil.ReadFile(caFile)
if err != nil {
log.Fatal(err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)
return &tls.Config{
ClientCAs: caCertPool,
ClientAuth: tls.RequireAndVerifyClientCert,
}
}
return &tls.Config{
ClientAuth: tls.NoClientCert,
}
}

78
api/jsonhttp/json.go Normal file

@@ -0,0 +1,78 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package jsonhttp provides helper functions to write JSON responses to
// http.ResponseWriter and read JSON bodies from http.Request.
package jsonhttp
import (
"encoding/json"
"io"
"net/http"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/worker"
)
// MaxPostSize is the maximum number of bytes that ParseBody reads from an
// http.Request.Body.
var MaxPostSize int64 = 1048576
// Render writes a JSON-encoded object to a http.ResponseWriter, as well as
// an HTTP status code.
func Render(w http.ResponseWriter, httpStatus int, v interface{}) {
// The Content-Type header must be set before WriteHeader is called,
// otherwise it is silently ignored.
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(httpStatus)
if v != nil {
result, _ := json.Marshal(v)
w.Write(result)
}
}
// RenderError writes an error, wrapped in the Message field of a JSON-encoded
// object to a http.ResponseWriter, as well as an HTTP status code.
// If the status code is 0, RenderError tries to guess the proper HTTP status
// code from the error type.
func RenderError(w http.ResponseWriter, httpStatus int, err error) {
if httpStatus == 0 {
httpStatus = http.StatusInternalServerError
// Try to guess the http status code from the error type
if _, isBadRequestError := err.(*cerrors.ErrBadRequest); isBadRequestError {
httpStatus = http.StatusBadRequest
} else {
switch err {
case cerrors.ErrNotFound:
httpStatus = http.StatusNotFound
case database.ErrTransaction, database.ErrBackendException:
httpStatus = http.StatusServiceUnavailable
case worker.ErrParentUnknown, worker.ErrUnsupported:
httpStatus = http.StatusBadRequest
}
}
}
Render(w, httpStatus, struct{ Message string }{Message: err.Error()})
}
// ParseBody reads a JSON-encoded body from a http.Request and unmarshals it
// into the provided object.
func ParseBody(r *http.Request, v interface{}) (int, error) {
defer r.Body.Close()
err := json.NewDecoder(io.LimitReader(r.Body, MaxPostSize)).Decode(v)
if err != nil {
return http.StatusUnsupportedMediaType, err
}
return 0, nil
}
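For illustration only, a handler built on these helpers could look like the sketch below; the route, package name and parameter struct are hypothetical and simply show how ParseBody, Render and RenderError fit together.
```
// Hypothetical example, not part of quay-sec.
package example

import (
	"net/http"

	"github.com/coreos/quay-sec/api/jsonhttp"
	"github.com/julienschmidt/httprouter"
)

// echoParameters is a placeholder request body used only for this sketch.
type echoParameters struct {
	Message string
}

// POSTEcho parses a JSON body and echoes its message back with a 201 status.
func POSTEcho(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
	var params echoParameters
	if status, err := jsonhttp.ParseBody(r, &params); err != nil {
		// ParseBody suggests an HTTP status code to report alongside the error.
		jsonhttp.RenderError(w, status, err)
		return
	}
	jsonhttp.Render(w, http.StatusCreated, struct{ Message string }{Message: params.Message})
}
```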

54
api/logic/general.go Normal file

@@ -0,0 +1,54 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package logic implements all the available API methods.
// Every method is documented in docs/API.md.
package logic
import (
"net/http"
"strconv"
"github.com/coreos/quay-sec/api/jsonhttp"
"github.com/coreos/quay-sec/health"
"github.com/coreos/quay-sec/worker"
"github.com/julienschmidt/httprouter"
)
// Version is an integer representing the API version.
const Version = 1
// GETVersions returns API and Engine versions.
func GETVersions(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
jsonhttp.Render(w, http.StatusOK, struct {
APIVersion string
EngineVersion string
}{
APIVersion: strconv.Itoa(Version),
EngineVersion: strconv.Itoa(worker.Version),
})
}
// GETHealth sums up the health of all the registered services.
func GETHealth(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
globalHealth, statuses := health.Healthcheck()
httpStatus := http.StatusOK
if !globalHealth {
httpStatus = http.StatusServiceUnavailable
}
jsonhttp.Render(w, httpStatus, statuses)
return
}

365
api/logic/layers.go Normal file

@@ -0,0 +1,365 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package logic
import (
"errors"
"net/http"
"strconv"
"github.com/coreos/quay-sec/api/jsonhttp"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
"github.com/coreos/quay-sec/worker"
"github.com/julienschmidt/httprouter"
)
// POSTLayersParameters represents the expected parameters for POSTLayers.
type POSTLayersParameters struct {
ID, Path, ParentID string
}
// POSTLayers analyzes a layer and returns the engine version that has been used
// for the analysis.
func POSTLayers(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
var parameters POSTLayersParameters
if s, err := jsonhttp.ParseBody(r, &parameters); err != nil {
jsonhttp.RenderError(w, s, err)
return
}
// Process data.
if err := worker.Process(parameters.ID, parameters.ParentID, parameters.Path); err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Get engine version and return.
jsonhttp.Render(w, http.StatusCreated, struct{ Version string }{Version: strconv.Itoa(worker.Version)})
}
// GETLayersOS returns the operating system of a layer if it exists.
// It uses not only the specified layer but also its parent layers if necessary.
// An empty OS string is returned if no OS has been detected.
func GETLayersOS(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find layer.
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerParent, database.FieldLayerOS})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Get OS.
os, err := layer.OperatingSystem()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusOK, struct{ OS string }{OS: os})
}
// GETLayersParent returns the parent ID of a layer if it exists.
// An empty ID string is returned if the layer has no parent.
func GETLayersParent(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find layer
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerParent})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Get layer's parent.
parent, err := layer.Parent([]string{database.FieldLayerID})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
ID := ""
if parent != nil {
ID = parent.ID
}
jsonhttp.Render(w, http.StatusOK, struct{ ID string }{ID: ID})
}
// GETLayersPackages returns the complete list of packages that a layer has
// if it exists.
func GETLayersPackages(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find layer
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerParent, database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find layer's packages.
packagesNodes, err := layer.AllPackages()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
packages := []*database.Package{}
if len(packagesNodes) > 0 {
packages, err = database.FindAllPackagesByNodes(packagesNodes, []string{database.FieldPackageOS, database.FieldPackageName, database.FieldPackageVersion})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
}
jsonhttp.Render(w, http.StatusOK, struct{ Packages []*database.Package }{Packages: packages})
}
// GETLayersPackagesDiff returns the list of packages that a layer installs and
// removes if it exists.
func GETLayersPackagesDiff(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find layer.
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find layer's packages.
installedPackages, removedPackages := make([]*database.Package, 0), make([]*database.Package, 0)
if len(layer.InstalledPackagesNodes) > 0 {
installedPackages, err = database.FindAllPackagesByNodes(layer.InstalledPackagesNodes, []string{database.FieldPackageOS, database.FieldPackageName, database.FieldPackageVersion})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
}
if len(layer.RemovedPackagesNodes) > 0 {
removedPackages, err = database.FindAllPackagesByNodes(layer.RemovedPackagesNodes, []string{database.FieldPackageOS, database.FieldPackageName, database.FieldPackageVersion})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
}
jsonhttp.Render(w, http.StatusOK, struct{ InstalledPackages, RemovedPackages []*database.Package }{InstalledPackages: installedPackages, RemovedPackages: removedPackages})
}
// GETLayersVulnerabilities returns the complete list of vulnerabilities that
// a layer has if it exists.
func GETLayersVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Get the minimum priority parameter.
minimumPriority := types.Priority(r.URL.Query().Get("minimumPriority"))
if minimumPriority == "" {
minimumPriority = "High" // Set default priority to High
} else if !minimumPriority.IsValid() {
jsonhttp.RenderError(w, 0, cerrors.NewBadRequestError("invalid priority"))
return
}
// Find layer
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerParent, database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find layer's packages.
packagesNodes, err := layer.AllPackages()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find vulnerabilities.
vulnerabilities, err := getVulnerabilitiesFromLayerPackagesNodes(packagesNodes, minimumPriority, []string{database.FieldVulnerabilityID, database.FieldVulnerabilityLink, database.FieldVulnerabilityPriority, database.FieldVulnerabilityDescription})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusOK, struct{ Vulnerabilities []*database.Vulnerability }{Vulnerabilities: vulnerabilities})
}
// GETLayersVulnerabilitiesDiff returns the list of vulnerabilities that a layer
// adds and removes if it exists.
func GETLayersVulnerabilitiesDiff(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Get the minimum priority parameter.
minimumPriority := types.Priority(r.URL.Query().Get("minimumPriority"))
if minimumPriority == "" {
minimumPriority = "High" // Set default priority to High
} else if !minimumPriority.IsValid() {
jsonhttp.RenderError(w, 0, cerrors.NewBadRequestError("invalid priority"))
return
}
// Find layer.
layer, err := database.FindOneLayerByID(p.ByName("id"), []string{database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Selected fields for vulnerabilities.
selectedFields := []string{database.FieldVulnerabilityID, database.FieldVulnerabilityLink, database.FieldVulnerabilityPriority, database.FieldVulnerabilityDescription}
// Find vulnerabilities for installed packages.
addedVulnerabilities, err := getVulnerabilitiesFromLayerPackagesNodes(layer.InstalledPackagesNodes, minimumPriority, selectedFields)
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find vulnerabilities for removed packages.
removedVulnerabilities, err := getVulnerabilitiesFromLayerPackagesNodes(layer.RemovedPackagesNodes, minimumPriority, selectedFields)
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Remove vulnerabilities that appear in both the added and removed lists (e.g. updated packages that are still vulnerable).
for ia, a := range addedVulnerabilities {
for ir, r := range removedVulnerabilities {
if a.ID == r.ID {
addedVulnerabilities = append(addedVulnerabilities[:ia], addedVulnerabilities[ia+1:]...)
removedVulnerabilities = append(removedVulnerabilities[:ir], removedVulnerabilities[ir+1:]...)
}
}
}
jsonhttp.Render(w, http.StatusOK, struct{ Adds, Removes []*database.Vulnerability }{Adds: addedVulnerabilities, Removes: removedVulnerabilities})
}
// POSTBatchLayersVulnerabilitiesParameters represents the expected parameters
// for POSTBatchLayersVulnerabilities.
type POSTBatchLayersVulnerabilitiesParameters struct {
LayersIDs []string
}
// POSTBatchLayersVulnerabilities returns the complete list of vulnerabilities
// that the provided layers have, if they all exist.
func POSTBatchLayersVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Parse body
var parameters POSTBatchLayersVulnerabilitiesParameters
if s, err := jsonhttp.ParseBody(r, &parameters); err != nil {
jsonhttp.RenderError(w, s, err)
return
}
if len(parameters.LayersIDs) == 0 {
jsonhttp.RenderError(w, http.StatusBadRequest, errors.New("at least one LayerID query parameter must be provided"))
return
}
// Get the minimum priority parameter.
minimumPriority := types.Priority(r.URL.Query().Get("minimumPriority"))
if minimumPriority == "" {
minimumPriority = "High" // Set default priority to High
} else if !minimumPriority.IsValid() {
jsonhttp.RenderError(w, 0, cerrors.NewBadRequestError("invalid priority"))
return
}
response := make(map[string]interface{})
// For each LayerID parameter
for _, layerID := range parameters.LayersIDs {
// Find layer
layer, err := database.FindOneLayerByID(layerID, []string{database.FieldLayerParent, database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find layer's packages.
packagesNodes, err := layer.AllPackages()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find vulnerabilities.
vulnerabilities, err := getVulnerabilitiesFromLayerPackagesNodes(packagesNodes, minimumPriority, []string{database.FieldVulnerabilityID, database.FieldVulnerabilityLink, database.FieldVulnerabilityPriority, database.FieldVulnerabilityDescription})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
response[layerID] = struct{ Vulnerabilities []*database.Vulnerability }{Vulnerabilities: vulnerabilities}
}
jsonhttp.Render(w, http.StatusOK, response)
}
// getSuccessorsFromPackagesNodes returns the node list of packages that have
// versions following the versions of the provided packages.
func getSuccessorsFromPackagesNodes(packagesNodes []string) ([]string, error) {
if len(packagesNodes) == 0 {
return []string{}, nil
}
// Get packages.
packages, err := database.FindAllPackagesByNodes(packagesNodes, []string{database.FieldPackageNextVersion})
if err != nil {
return []string{}, err
}
// Find all packages' successors.
var packagesNextVersions []string
for _, pkg := range packages {
nextVersions, err := pkg.NextVersions([]string{})
if err != nil {
return []string{}, err
}
for _, version := range nextVersions {
packagesNextVersions = append(packagesNextVersions, version.Node)
}
}
return packagesNextVersions, nil
}
// getVulnerabilitiesFromLayerPackagesNodes returns the list of vulnerabilities
// affecting the provided package nodes, filtered by Priority.
func getVulnerabilitiesFromLayerPackagesNodes(packagesNodes []string, minimumPriority types.Priority, selectedFields []string) ([]*database.Vulnerability, error) {
if len(packagesNodes) == 0 {
return []*database.Vulnerability{}, nil
}
// Get successors of the packages.
packagesNextVersions, err := getSuccessorsFromPackagesNodes(packagesNodes)
if err != nil {
return []*database.Vulnerability{}, err
}
if len(packagesNextVersions) == 0 {
return []*database.Vulnerability{}, nil
}
// Find vulnerabilities fixed in these successors.
vulnerabilities, err := database.FindAllVulnerabilitiesByFixedIn(packagesNextVersions, selectedFields)
if err != nil {
return []*database.Vulnerability{}, err
}
// Filter vulnerabilities depending on their priority and remove duplicates.
filteredVulnerabilities := []*database.Vulnerability{}
seen := map[string]struct{}{}
for _, v := range vulnerabilities {
if minimumPriority.Compare(v.Priority) <= 0 {
if _, alreadySeen := seen[v.ID]; !alreadySeen {
filteredVulnerabilities = append(filteredVulnerabilities, v)
seen[v.ID] = struct{}{}
}
}
}
return filteredVulnerabilities, nil
}

247
api/logic/vulnerabilities.go Normal file

@@ -0,0 +1,247 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package logic
import (
"errors"
"net/http"
"github.com/coreos/quay-sec/api/jsonhttp"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/julienschmidt/httprouter"
)
// GETVulnerabilities returns a vulnerability identified by an ID if it exists.
func GETVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find vulnerability.
vulnerability, err := database.FindOneVulnerability(p.ByName("id"), []string{database.FieldVulnerabilityID, database.FieldVulnerabilityLink, database.FieldVulnerabilityPriority, database.FieldVulnerabilityDescription, database.FieldVulnerabilityFixedIn})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
abstractVulnerability, err := vulnerability.ToAbstractVulnerability()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusOK, abstractVulnerability)
}
// POSTVulnerabilities manually inserts a vulnerability into the database if it
// does not exist yet.
func POSTVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
var parameters *database.AbstractVulnerability
if s, err := jsonhttp.ParseBody(r, &parameters); err != nil {
jsonhttp.RenderError(w, s, err)
return
}
// Ensure that the vulnerability does not exist.
vulnerability, err := database.FindOneVulnerability(parameters.ID, []string{})
if err != nil && err != cerrors.ErrNotFound {
jsonhttp.RenderError(w, 0, err)
return
}
if vulnerability != nil {
jsonhttp.RenderError(w, 0, cerrors.NewBadRequestError("vulnerability already exists"))
return
}
// Insert packages.
packages := database.AbstractPackagesToPackages(parameters.AffectedPackages)
err = database.InsertPackages(packages)
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
var pkgNodes []string
for _, p := range packages {
pkgNodes = append(pkgNodes, p.Node)
}
// Insert vulnerability.
notifications, err := database.InsertVulnerabilities([]*database.Vulnerability{parameters.ToVulnerability(pkgNodes)})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Insert notifications.
err = database.InsertNotifications(notifications, database.GetDefaultNotificationWrapper())
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusCreated, nil)
}
// PUTVulnerabilities updates a vulnerability if it exists.
func PUTVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
var parameters *database.AbstractVulnerability
if s, err := jsonhttp.ParseBody(r, &parameters); err != nil {
jsonhttp.RenderError(w, s, err)
return
}
parameters.ID = p.ByName("id")
// Ensure that the vulnerability exists.
_, err := database.FindOneVulnerability(parameters.ID, []string{})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Insert packages.
packages := database.AbstractPackagesToPackages(parameters.AffectedPackages)
err = database.InsertPackages(packages)
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
var pkgNodes []string
for _, p := range packages {
pkgNodes = append(pkgNodes, p.Node)
}
// Insert vulnerability.
notifications, err := database.InsertVulnerabilities([]*database.Vulnerability{parameters.ToVulnerability(pkgNodes)})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Insert notifications.
err = database.InsertNotifications(notifications, database.GetDefaultNotificationWrapper())
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusCreated, nil)
}
// DELVulnerabilities deletes a vulnerability if it exists.
func DELVulnerabilities(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
err := database.DeleteVulnerability(p.ByName("id"))
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
jsonhttp.Render(w, http.StatusNoContent, nil)
}
// GETVulnerabilitiesIntroducingLayers returns the list of layers that
// introduce a given vulnerability, if it exists.
// To clarify, it does not return the list of every layer that has
// the vulnerability.
func GETVulnerabilitiesIntroducingLayers(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Find vulnerability to verify that it exists.
_, err := database.FindOneVulnerability(p.ByName("id"), []string{})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
layers, err := database.FindAllLayersIntroducingVulnerability(p.ByName("id"), []string{database.FieldLayerID})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
layersIDs := []string{}
for _, l := range layers {
layersIDs = append(layersIDs, l.ID)
}
jsonhttp.Render(w, http.StatusOK, struct{ IntroducingLayersIDs []string }{IntroducingLayersIDs: layersIDs})
}
// POSTVulnerabilitiesAffectedLayersParameters represents the expected
// parameters for POSTVulnerabilitiesAffectedLayers.
type POSTVulnerabilitiesAffectedLayersParameters struct {
LayersIDs []string
}
// POSTVulnerabilitiesAffectedLayers returns whether the specified layers
// (by their IDs) are vulnerable to the given Vulnerability or not.
func POSTVulnerabilitiesAffectedLayers(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Parse body.
var parameters POSTBatchLayersVulnerabilitiesParameters
if s, err := jsonhttp.ParseBody(r, &parameters); err != nil {
jsonhttp.RenderError(w, s, err)
return
}
if len(parameters.LayersIDs) == 0 {
jsonhttp.RenderError(w, http.StatusBadRequest, errors.New("getting the entire list of affected layers is not supported yet: at least one LayerID query parameter must be provided"))
return
}
// Find vulnerability.
vulnerability, err := database.FindOneVulnerability(p.ByName("id"), []string{database.FieldVulnerabilityFixedIn})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Save the fixed in nodes into a map for fast check.
fixedInPackagesMap := make(map[string]struct{})
for _, fixedInNode := range vulnerability.FixedInNodes {
fixedInPackagesMap[fixedInNode] = struct{}{}
}
response := make(map[string]interface{})
// For each LayerID parameter.
for _, layerID := range parameters.LayersIDs {
// Find layer
layer, err := database.FindOneLayerByID(layerID, []string{database.FieldLayerParent, database.FieldLayerPackages})
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Find layer's packages.
packagesNodes, err := layer.AllPackages()
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Get the successor packages of the layer's packages.
successors, err := getSuccessorsFromPackagesNodes(packagesNodes)
if err != nil {
jsonhttp.RenderError(w, 0, err)
return
}
// Determine whether the layer is vulnerable by checking whether one of the
// successors of its packages is fixed by the vulnerability.
vulnerable := false
for _, p := range successors {
if _, fixed := fixedInPackagesMap[p]; fixed {
vulnerable = true
break
}
}
response[layerID] = struct{ Vulnerable bool }{Vulnerable: vulnerable}
}
jsonhttp.Render(w, http.StatusOK, response)
}

96
api/router.go Normal file

@@ -0,0 +1,96 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package api
import (
"net/http"
"strings"
"time"
"github.com/coreos/quay-sec/api/logic"
"github.com/coreos/quay-sec/api/wrappers"
"github.com/julienschmidt/httprouter"
)
// VersionRouter is an HTTP router that forwards requests to the appropriate
// router depending on the API version specified in the requested URI.
type VersionRouter map[string]*httprouter.Router
// NewVersionRouter instantiates a VersionRouter and every sub-routers that are
// necessary to handle supported API versions.
func NewVersionRouter(to time.Duration) *VersionRouter {
return &VersionRouter{
"/v1": NewRouterV1(to),
}
}
// ServeHTTP forwards requests to the appropriate router depending on the API
// version specified in the requested URI and removes the version information
// from the request URL.Path, without modifying the request's RequestURI.
func (vs VersionRouter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
urlStr := r.URL.String()
var version string
if len(urlStr) >= 3 {
version = urlStr[:3]
}
if router, _ := vs[version]; router != nil {
// Remove the version number from the request path to let the router do its
// job but do not update the RequestURI
r.URL.Path = strings.Replace(r.URL.Path, version, "", 1)
router.ServeHTTP(w, r)
return
}
http.NotFound(w, r)
}
// NewRouterV1 creates a new router for the API (Version 1)
func NewRouterV1(to time.Duration) *httprouter.Router {
router := httprouter.New()
wrap := func(fn httprouter.Handle) httprouter.Handle {
return wrappers.Log(wrappers.TimeOut(to, fn))
}
// General
router.GET("/versions", wrap(logic.GETVersions))
router.GET("/health", wrap(logic.GETHealth))
// Layers
router.POST("/layers", wrap(logic.POSTLayers))
router.GET("/layers/:id/os", wrap(logic.GETLayersOS))
router.GET("/layers/:id/parent", wrap(logic.GETLayersParent))
router.GET("/layers/:id/packages", wrap(logic.GETLayersPackages))
router.GET("/layers/:id/packages/diff", wrap(logic.GETLayersPackagesDiff))
router.GET("/layers/:id/vulnerabilities", wrap(logic.GETLayersVulnerabilities))
router.GET("/layers/:id/vulnerabilities/diff", wrap(logic.GETLayersVulnerabilitiesDiff))
// # Batch version of "/layers/:id/vulnerabilities"
router.POST("/batch/layers/vulnerabilities", wrap(logic.POSTBatchLayersVulnerabilities))
// Vulnerabilities
router.POST("/vulnerabilities", wrap(logic.POSTVulnerabilities))
router.PUT("/vulnerabilities/:id", wrap(logic.PUTVulnerabilities))
router.GET("/vulnerabilities/:id", wrap(logic.GETVulnerabilities))
router.DELETE("/vulnerabilities/:id", wrap(logic.DELVulnerabilities))
router.GET("/vulnerabilities/:id/introducing-layers", wrap(logic.GETVulnerabilitiesIntroducingLayers))
router.POST("/vulnerabilities/:id/affected-layers", wrap(logic.POSTVulnerabilitiesAffectedLayers))
return router
}
// NewHealthRouter creates a new router that only serves the Health function on /
func NewHealthRouter() *httprouter.Router {
router := httprouter.New()
router.GET("/", logic.GETHealth)
return router
}

75
api/wrappers/log.go Normal file

@@ -0,0 +1,75 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package wrappers contains httprouter.Handle wrappers that are used in the API.
package wrappers
import (
"net/http"
"time"
"github.com/coreos/pkg/capnslog"
"github.com/julienschmidt/httprouter"
)
var log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "api")
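// logWriter decorates an http.ResponseWriter to record the response status and
// size so they can be logged once the wrapped handler returns.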
type logWriter struct {
http.ResponseWriter
status int
size int
}
func (lw *logWriter) Header() http.Header {
return lw.ResponseWriter.Header()
}
func (lw *logWriter) Write(b []byte) (int, error) {
if !lw.Written() {
lw.WriteHeader(http.StatusOK)
}
size, err := lw.ResponseWriter.Write(b)
lw.size += size
return size, err
}
func (lw *logWriter) WriteHeader(s int) {
lw.status = s
lw.ResponseWriter.WriteHeader(s)
}
func (lw *logWriter) Size() int {
return lw.size
}
func (lw *logWriter) Written() bool {
return lw.status != 0
}
func (lw *logWriter) Status() int {
return lw.status
}
// Log wraps an httprouter.Handle and logs the API call
func Log(fn httprouter.Handle) httprouter.Handle {
return func(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
lw := &logWriter{ResponseWriter: w}
start := time.Now()
fn(lw, r, p)
log.Infof("%d %s %s (%s)", lw.Status(), r.Method, r.RequestURI, time.Since(start))
}
}

105
api/wrappers/timeout.go Normal file
View File

@ -0,0 +1,105 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package wrappers
import (
"errors"
"fmt"
"net/http"
"sync"
"time"
"github.com/coreos/quay-sec/api/jsonhttp"
"github.com/julienschmidt/httprouter"
)
// ErrHandlerTimeout is returned on ResponseWriter Write calls
// in handlers which have timed out.
var ErrHandlerTimeout = errors.New("http: Handler timeout")
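// timeoutWriter decorates an http.ResponseWriter and rejects any write that
// happens after the timeout has fired, returning ErrHandlerTimeout instead.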
type timeoutWriter struct {
http.ResponseWriter
mu sync.Mutex
timedOut bool
wroteHeader bool
}
func (tw *timeoutWriter) Header() http.Header {
return tw.ResponseWriter.Header()
}
func (tw *timeoutWriter) Write(p []byte) (int, error) {
tw.mu.Lock()
defer tw.mu.Unlock()
tw.wroteHeader = true // implicitly at least
if tw.timedOut {
return 0, ErrHandlerTimeout
}
return tw.ResponseWriter.Write(p)
}
func (tw *timeoutWriter) WriteHeader(status int) {
tw.mu.Lock()
defer tw.mu.Unlock()
if tw.timedOut || tw.wroteHeader {
return
}
tw.wroteHeader = true
tw.ResponseWriter.WriteHeader(status)
}
// TimeOut wraps an httprouter.Handle and ensures that a response is given
// within the specified duration.
//
// If the handler takes longer than the time limit, the wrapper responds with
// a Service Unavailable error and an error message; the handler's response,
// which may come later, is ignored.
//
// After a timeout, any write the handler makes to its ResponseWriter will
// return ErrHandlerTimeout.
//
// If the duration is 0, the wrapper does nothing.
func TimeOut(d time.Duration, fn httprouter.Handle) httprouter.Handle {
if d == 0 {
fmt.Println("nope timeout")
return fn
}
return func(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
// Buffered so that the handler goroutine can always send its completion
// signal and exit, even if the timeout fired first.
done := make(chan bool, 1)
tw := &timeoutWriter{ResponseWriter: w}
go func() {
fn(tw, r, p)
done <- true
}()
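// Wait for either the handler to finish or the timer to fire; on timeout,
// reply with 503 Service Unavailable unless the handler already wrote a
// header, then mark the writer as timed out.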
select {
case <-done:
return
case <-time.After(d):
tw.mu.Lock()
defer tw.mu.Unlock()
if !tw.wroteHeader {
jsonhttp.RenderError(tw.ResponseWriter, http.StatusServiceUnavailable, ErrHandlerTimeout)
}
tw.timedOut = true
}
}
}

150
benchmark_test.go Normal file
View File

@ -0,0 +1,150 @@
package main
import (
"database/sql"
"math"
"strconv"
"testing"
"github.com/barakmich/glog"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/utils/types"
)
var vulnerabilities []*database.Vulnerability
var dbInfo = "host=192.168.99.100 port=5432 user=postgres sslmode=disable dbname=postgres"
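// reset drops the quads table so that each benchmark run starts from an empty
// quad store.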
func reset() {
db, err := sql.Open("postgres", dbInfo)
if err != nil {
panic(err)
}
db.Exec("DROP TABLE quads;")
db.Close()
}
func getVulnerabilities(id string) []*database.Vulnerability {
layer, err := database.FindOneLayerByID(id, []string{database.FieldLayerParent}, []string{database.FieldLayerContentInstalledPackages, database.FieldLayerContentRemovedPackages})
if err != nil {
panic(err)
}
packagesNodes, err := layer.AllPackages()
if err != nil {
panic(err)
}
vulnerabilities, err := database.GetVulnerabilitiesFromLayerPackagesNodes(packagesNodes, types.Negligible, []string{database.FieldVulnerabilityID, database.FieldVulnerabilityLink, database.FieldVulnerabilityPriority, database.FieldVulnerabilityDescription})
if err != nil {
panic(err)
}
return vulnerabilities
}
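// generateLayersData inserts packagesCount packages (each with
// packagesPerBranchesCount versions), spreads the first version of every
// package across a chain of sublayersCount layers and returns the ID of the
// deepest layer.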
func generateLayersData(sublayersCount, packagesCount, packagesPerBranchesCount int) string {
var startPackages []string
var allPackages []*database.Package
for i := 0; i < packagesCount; i++ {
for j := 0; j < packagesPerBranchesCount; j++ {
p := &database.Package{
OS: "testOS",
Name: "p" + strconv.Itoa(i),
Version: types.NewVersionUnsafe(strconv.Itoa(j)),
}
allPackages = append(allPackages, p)
if j == 0 {
startPackages = append(startPackages, p.GetNode())
}
}
}
err := database.InsertPackages(allPackages)
if err != nil {
panic(err)
}
var allLayers []*database.Layer
var packagesCursor int
for i := 0; i < sublayersCount; i++ {
parentNode := ""
if i > 0 {
parentNode = allLayers[i-1].GetNode()
}
var installedPackagesNodes []string
if i == sublayersCount-1 {
if packagesCursor <= packagesCount-1 {
installedPackagesNodes = startPackages[packagesCursor:packagesCount]
}
} else if (packagesCount / sublayersCount) > 0 {
upperPackageCursor := int(math.Min(float64(packagesCursor+(packagesCount/sublayersCount)), float64(packagesCount)))
installedPackagesNodes = startPackages[packagesCursor:upperPackageCursor]
packagesCursor = upperPackageCursor
}
layer := &database.Layer{
ID: "l" + strconv.Itoa(i),
ParentNode: parentNode,
Content: database.LayerContent{
TarSum: "lc" + strconv.Itoa(i),
OS: "testOS",
InstalledPackagesNodes: installedPackagesNodes,
},
}
err := database.InsertLayer(layer)
if err != nil {
panic(err)
}
allLayers = append(allLayers, layer)
}
return allLayers[sublayersCount-1].ID
}
func benchmarkVulnerabilities(b *testing.B, sublayersCount, packagesCount, packagesPerBranchesCount int) {
glog.SetVerbosity(0)
glog.SetAlsoToStderr(false)
glog.SetStderrThreshold("FATAL")
reset()
err := database.Open("sql", dbInfo)
if err != nil {
panic(err)
}
defer database.Close()
defer reset()
layerID := generateLayersData(sublayersCount, packagesCount, packagesPerBranchesCount)
var v []*database.Vulnerability
for n := 0; n < b.N; n++ {
// store result to prevent the compiler eliminating the function call.
v = getVulnerabilities(layerID)
}
// store result to prevent the compiler eliminating the Benchmark itself.
vulnerabilities = v
}
func BenchmarkVulnerabilitiesL1P1PPB1(b *testing.B) { benchmarkVulnerabilities(b, 1, 1, 1) }
func BenchmarkVulnerabilitiesL1P1PPB5(b *testing.B) { benchmarkVulnerabilities(b, 1, 1, 5) }
func BenchmarkVulnerabilitiesL1P1PPB10(b *testing.B) { benchmarkVulnerabilities(b, 1, 1, 10) }
func BenchmarkVulnerabilitiesL1P1PPB20(b *testing.B) { benchmarkVulnerabilities(b, 1, 1, 20) }
func BenchmarkVulnerabilitiesL1P1PPB50(b *testing.B) { benchmarkVulnerabilities(b, 1, 1, 50) }
func BenchmarkVulnerabilitiesL1P5PPB1(b *testing.B) { benchmarkVulnerabilities(b, 1, 5, 1) }
func BenchmarkVulnerabilitiesL1P10PPB1(b *testing.B) { benchmarkVulnerabilities(b, 1, 10, 1) }
func BenchmarkVulnerabilitiesL1P20PPB1(b *testing.B) { benchmarkVulnerabilities(b, 1, 20, 1) }
func BenchmarkVulnerabilitiesL1P50PPB1(b *testing.B) { benchmarkVulnerabilities(b, 1, 50, 1) }
func BenchmarkVulnerabilitiesL5P1PPB1(b *testing.B) { benchmarkVulnerabilities(b, 5, 1, 1) }
func BenchmarkVulnerabilitiesL10P1PPB1(b *testing.B) { benchmarkVulnerabilities(b, 10, 1, 1) }
func BenchmarkVulnerabilitiesL20P1PPB1(b *testing.B) { benchmarkVulnerabilities(b, 20, 1, 1) }
func BenchmarkVulnerabilitiesL50P1PPB1(b *testing.B) { benchmarkVulnerabilities(b, 50, 1, 1) }
func BenchmarkVulnerabilitiesL5P5PPB5(b *testing.B) { benchmarkVulnerabilities(b, 5, 5, 5) }
func BenchmarkVulnerabilitiesL10P10PPB10(b *testing.B) { benchmarkVulnerabilities(b, 10, 10, 10) }
func BenchmarkVulnerabilitiesL20P20PPB20(b *testing.B) { benchmarkVulnerabilities(b, 20, 20, 20) }
func BenchmarkVulnerabilitiesL50P50PPB50(b *testing.B) { benchmarkVulnerabilities(b, 50, 50, 50) }

1
cloudformation/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
.venv

View File

@ -0,0 +1,154 @@
# Copyright 2015 CoreOS, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import re
import logging
import json
import yaml
import sys
import hashlib
import boto.cloudformation as cloudformation
import boto.s3 as s3
from boto.s3.key import Key
from jinja2 import FileSystemLoader, Environment, StrictUndefined
from container_cloud_config import CloudConfigContext
logger = logging.getLogger(__name__)
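# userdata is a Jinja2 filter that turns a rendered cloud-config document into
# a CloudFormation UserData property, joining its lines with Fn::Join and
# encoding the result with Fn::Base64.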
def userdata(value, json_indent=2):
encoded = {
"Fn::Base64": {
"Fn::Join": ["", [line + '\n' for line in value.split('\n')]]
}
}
return json.dumps(encoded, indent=json_indent)
def bootstrap_user_data(user_data, expiration_seconds=3600):
uploaded = upload_s3_unique(user_data)
signed_url = uploaded.generate_url(expires_in=expiration_seconds)
template = ENV.get_template('bootstrap_cloudconfig.yaml')
return template.render(cloudconfig_url=signed_url)
ENV = Environment(loader=FileSystemLoader('templates'), undefined=StrictUndefined, extensions=['jinja2.ext.do'])
ENV.filters['userdata'] = userdata
ENV.filters['bootstrap_user_data'] = bootstrap_user_data
CONFIG_CONTEXT = CloudConfigContext()
CONFIG_CONTEXT.populate_jinja_environment(ENV)
ARGUMENT = re.compile(r'(-[\w])|(--[\w]+)')
def parse_args():
desc = 'Generate the cloud config for all nodes in the cluster.'
parser = argparse.ArgumentParser(description=desc)
parser.add_argument('template', help='Template file to use when creating stack')
parser.add_argument('region', help='AWS Region',)
parser.add_argument('cfbucket', help='AWS CloudFormation Bucket')
parser.add_argument('accesskey', help='AWS Access Key ID')
parser.add_argument('secretkey', help='AWS Secret Access Key')
parser.add_argument('--json', dest='json', help='Output json config (default).',
action='store_true')
parser.add_argument('--yaml', dest='json', help='Output yaml config.', action='store_false')
parser.add_argument('--upload', dest='stackname',
help='Upload the stack to cloud formation with the given name.')
parser.set_defaults(json=True)
logger.debug('Parsing all args')
_, unknown = parser.parse_known_args()
logger.debug('Unknown args: %s', unknown)
added_args = set()
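# Treat any remaining "-x"/"--name" style flags as template variables: register
# each unknown flag with argparse and re-parse until none remain.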
while (len(unknown) > 0 and ARGUMENT.match(unknown[0]) and
ARGUMENT.match(unknown[0]).end() == len(unknown[0])):
logger.debug('Adding argument: %s', unknown[0])
added_args.add(unknown[0].lstrip('-'))
parser.add_argument(unknown[0])
_, unknown = parser.parse_known_args()
logger.debug('Parsing final set of args')
return parser.parse_args(), added_args
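# upload_s3_unique uploads file_contents to the CloudFormation bucket under a
# key derived from the SHA-1 of the contents, reusing the existing object when
# one with that key is already present.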
def upload_s3_unique(region, cfbucket, credentials, file_contents):
logger.debug('Checking for file in s3')
json_stack_filename = hashlib.sha1(file_contents).hexdigest()
ess_three = s3.connect_to_region(region, **credentials)
bucket = ess_three.get_bucket(cfbucket, validate=False)
template_key = bucket.get_key(json_stack_filename)
if template_key is None:
logger.debug('Uploading file to s3')
template_key = Key(bucket)
template_key.key = json_stack_filename
template_key.set_contents_from_string(file_contents)
return template_key
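# upload pushes the rendered stack definition to S3 and then creates the
# CloudFormation stack from the resulting template URL.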
def upload(region, cfbucket, credentials, stack_name, json_stack_def):
template_key = upload_s3_unique(region, cfbucket, credentials, json_stack_def)
template_url = template_key.generate_url(expires_in=0, query_auth=False)
logger.debug('Template available in s3 at url: %s', template_url)
logger.debug('Uploading stack definition with name: %s', stack_name)
cf = cloudformation.connect_to_region(region, **credentials)
cf.create_stack(stack_name, capabilities=['CAPABILITY_IAM'], template_url=template_url)
logger.debug('Done uploading stack definition')
def main():
logging.basicConfig(level=logging.DEBUG)
all_args, added_args = parse_args()
template_kwargs = {added: getattr(all_args, added, None) for added in added_args}
credentials = {
'aws_access_key_id': all_args.accesskey,
'aws_secret_access_key': all_args.secretkey,
}
logger.debug('Rendering yaml template')
template = ENV.get_template(all_args.template)
yaml_stack_def = template.render(**template_kwargs)
logger.debug('Validating yaml')
parsed = yaml.load(yaml_stack_def)
if not all_args.json and all_args.stackname:
logger.error('YAML cannot be uploaded directly to cloud formation, please use json')
sys.exit(1)
if all_args.json:
logger.debug('Rendering json')
if all_args.stackname:
json_stack_def = json.dumps(parsed)
CONFIG_CONTEXT.prime_flattened_image_cache()
upload(all_args.region, all_args.cfbucket, credentials, all_args.stackname, json_stack_def)
else:
print json.dumps(parsed, indent=2)
else:
print yaml_stack_def
if __name__ == '__main__':
main()

View File

@ -0,0 +1,5 @@
jinja2
requests
pyyaml
boto
git+https://github.com/DevTable/container-cloud-config.git

View File

@ -0,0 +1,155 @@
{% macro nodedata() -%}
#cloud-config
ssh_authorized_keys:
{{ ssh_public_keys() }}
write_files:
- path: /etc/certs/quay-sec.crt
permissions: '0600'
content: |
{{ app_public_key()|indent(4) }}
- path: /etc/certs/quay-sec.key
permissions: '0600'
content: |
{{ app_private_key()|indent(4) }}
- path: /etc/certs/ca.crt
permissions: '0600'
content: |
{{ app_ca()|indent(4) }}
- path: /etc/sysctl.d/50-somaxconn.conf
content: net.core.somaxconn = 16384
coreos:
update:
reboot-strategy: off
group: stable
units:
- name: systemd-sysctl.service
command: restart
{% set after = [] %}
{% block logentries scoped -%}
{% if logentries_token %}
{% do after.append('docker-logentries.service') %}
- name: docker-logentries.service
command: start
content: |
[Unit]
Description=Forward Docker container's log to LogEntries
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
ExecStartPre=/usr/bin/docker pull logentries/docker-logentries
ExecStart=/bin/bash -c "/usr/bin/docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock logentries/docker-logentries --no-stats --no-dockerEvents -t {{ logentries_token }} -a host=`uname -n`"
{% endif %}
{%- endblock %}
{% block app_container scoped -%}
{{ dockersystemd('quay-sec',
'quay.io/coreos/quay-sec',
'coreos+quaysec',
'AQMFTPHH5XMZAE0IRLJSO0K6SL9OP2896ENGY22PJLVUW9TTPDX5KOPE31DAQM23',
image_tag,
extra_args='-p 6060:6060 -p 6061:6061 -v /etc/certs:/etc/certs:ro',
command=app_arguments,
flattened=True,
after_units=after,
)|indent(4) }}
{%- endblock %}
{%- endmacro %}
AWSTemplateFormatVersion: '2010-09-09'
Description: Quay-sec on EC2 behind an ELB
Resources:
AppServerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Quay-sec App Server SecurityGroup
SecurityGroupIngress:
- CidrIp: 0.0.0.0/0
FromPort: '22'
IpProtocol: tcp
ToPort: '22'
- FromPort: '6060'
ToPort: '6060'
IpProtocol: tcp
SourceSecurityGroupOwnerId: 'amazon-elb'
SourceSecurityGroupName: 'amazon-elb-sg'
- FromPort: '6061'
ToPort: '6061'
IpProtocol: tcp
SourceSecurityGroupOwnerId: 'amazon-elb'
SourceSecurityGroupName: 'amazon-elb-sg'
AppServerLaunchConfig:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
ImageId: {{ coreos_ami|default(load_coreos_ami('beta')) }}
InstanceType: m3.medium
KeyName: {{ ssh_key_name }}
SecurityGroups:
- {Ref: AppServerSecurityGroup}
UserData: {{ nodedata()|userdata|indent(6) }}
AppServerAutoScale:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
AvailabilityZones:
Fn::GetAZs: ''
LaunchConfigurationName: {Ref: AppServerLaunchConfig}
{% block asg_parameters -%}
DesiredCapacity: '3'
MaxSize: '10'
MinSize: '3'
HealthCheckType: ELB
{%- endblock %}
HealthCheckGracePeriod: 600
LoadBalancerNames:
{{ elb_names()|indent(4) }}
Tags:
- Key: Name
PropagateAtLaunch: true
Value: {Ref: 'AWS::StackName'}
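# The policies and alarms below scale the group by one instance at a time,
# based on average CPU utilization.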
ScaleUp:
Type: AWS::AutoScaling::ScalingPolicy
Properties:
AdjustmentType: ChangeInCapacity
AutoScalingGroupName: {Ref: AppServerAutoScale}
ScalingAdjustment: '1'
Cooldown: '600'
ScaleDown:
Type: AWS::AutoScaling::ScalingPolicy
Properties:
AdjustmentType: ChangeInCapacity
AutoScalingGroupName: {Ref: AppServerAutoScale}
ScalingAdjustment: '-1'
Cooldown: '600'
ScaleUpAlarm:
Type: AWS::CloudWatch::Alarm
Properties:
EvaluationPeriods: '2'
Statistic: Average
Threshold: '60'
AlarmDescription: Alarm if CPU too high or metric disappears indicating instance is down
Period: '60'
AlarmActions:
- {Ref: ScaleUp}
Namespace: AWS/EC2
Dimensions:
- Name: AutoScalingGroupName
Value: { Ref: AppServerAutoScale }
ComparisonOperator: GreaterThanThreshold
MetricName: CPUUtilization
ScaleDownAlarm:
Type: AWS::CloudWatch::Alarm
Properties:
EvaluationPeriods: '3'
Statistic: Average
Threshold: '30'
AlarmDescription: Alarm if CPU too low
Period: '60'
AlarmActions:
- {Ref: ScaleDown}
Namespace: AWS/EC2
Dimensions:
- Name: AutoScalingGroupName
Value: { Ref: AppServerAutoScale }
ComparisonOperator: LessThanThreshold
MetricName: CPUUtilization

View File

@ -0,0 +1,45 @@
AWSTemplateFormatVersion: '2010-09-09'
Description: HTTPS ELB for Quay-sec
Resources:
QuaySecLoadBalancer:
Type: AWS::ElasticLoadBalancing::LoadBalancer
Properties:
CrossZone: true
AvailabilityZones:
Fn::GetAZs: ''
Listeners:
- LoadBalancerPort: 6060
InstancePort: 6060
Protocol: TCP
- LoadBalancerPort: 6061
InstancePort: 6061
Protocol: TCP
HealthCheck:
Target: HTTP:6061/
HealthyThreshold: '2'
UnhealthyThreshold: '3'
Interval: '60'
Timeout: '30'
ConnectionSettings:
IdleTimeout: 3600
ConnectionDrainingPolicy:
Enabled: true
Timeout: 2000
ELBHealthyHostsAlarm:
Type: AWS::CloudWatch::Alarm
Properties:
EvaluationPeriods: '10'
Statistic: Minimum
Threshold: '2'
AlarmDescription: Alarm if the health host count falls below 2
Period: '60'
AlarmActions:
{{ alarm_actions()|indent(6) }}
InsufficientDataActions:
{{ alarm_actions()|indent(6) }}
Namespace: AWS/ELB
Dimensions:
- Name: LoadBalancerName
Value: {Ref: QuaySecLoadBalancer}
ComparisonOperator: LessThanThreshold
MetricName: HealthyHostCount

182
database/database.go Normal file
View File

@ -0,0 +1,182 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package database implements all the database models and the functions that
// manipulate them.
package database
import (
"errors"
"os"
"github.com/barakmich/glog"
"github.com/coreos/pkg/capnslog"
"github.com/coreos/quay-sec/health"
"github.com/coreos/quay-sec/utils"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/google/cayley/graph/path"
// Load all supported backends.
_ "github.com/google/cayley/graph/bolt"
_ "github.com/google/cayley/graph/leveldb"
_ "github.com/google/cayley/graph/memstore"
_ "github.com/google/cayley/graph/mongo"
_ "github.com/google/cayley/graph/sql"
)
const (
// FieldIs is the graph predicate defining the type of an entity.
FieldIs = "is"
)
var (
log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "database")
// ErrTransaction is an error that occurs when a database transaction fails.
ErrTransaction = errors.New("database: transaction failed (concurrent modification?)")
// ErrBackendException is an error that occurs when the database backend does
// not work properly (i.e., is unreachable).
ErrBackendException = errors.New("database: could not query backend")
// ErrInconsistent is an error that occurs when a database consistency check
// fails (i.e., when an entity which is supposed to be unique is detected twice).
ErrInconsistent = errors.New("database: inconsistent database")
// ErrCantOpen is an error that occurs when the database could not be opened.
ErrCantOpen = errors.New("database: could not open database")
store *cayley.Handle
)
func init() {
health.RegisterHealthchecker("database", Healthcheck)
}
// Open opens a Cayley database, creating it if necessary, and stores its handle.
func Open(dbType, dbPath string) error {
if store != nil {
log.Errorf("could not open database at %s : a database is already opened", dbPath)
return ErrCantOpen
}
var err error
// Try to create database if necessary
if dbType == "bolt" || dbType == "leveldb" {
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
// The database does not exist yet, initialize it if possible
log.Infof("database at %s does not exist yet, creating it", dbPath)
if err = graph.InitQuadStore(dbType, dbPath, nil); err != nil {
log.Errorf("could not create database at %s : %s", dbPath, err)
return ErrCantOpen
}
}
} else if dbType == "sql" {
graph.InitQuadStore(dbType, dbPath, nil)
}
store, err = cayley.NewGraph(dbType, dbPath, nil)
if err != nil {
log.Errorf("could not open database at %s : %s", dbPath, err)
return ErrCantOpen
}
return nil
}
// Close closes a Cayley database
func Close() {
if store != nil {
store.Close()
store = nil
}
}
// Healthcheck simply adds and then removes a quad in Cayley to ensure it is
// working. It reports a healthy status when the transaction succeeds.
func Healthcheck() health.Status {
var err error
if store != nil {
t := cayley.NewTransaction()
q := cayley.Quad("cayley", "is", "healthy", "")
t.AddQuad(q)
t.RemoveQuad(q)
glog.SetStderrThreshold("FATAL") // TODO REMOVE ME
err = store.ApplyTransaction(t)
glog.SetStderrThreshold("ERROR") // TODO REMOVE ME
}
return health.Status{IsEssential: true, IsHealthy: err == nil, Details: nil}
}
// toValue returns a single value from a path
// If the path does not lead to a value, an empty string is returned
// If the path leads to multiple values or if a database error occurs, an empty string and an error are returned
func toValue(p *path.Path) (string, error) {
var value string
it, _ := p.BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
if value != "" {
log.Error("failed query in toValue: used on an iterator containing multiple values")
return "", ErrInconsistent
}
if it.Result() != nil {
value = store.NameOf(it.Result())
}
}
if it.Err() != nil {
log.Errorf("failed query in toValue: %s", it.Err())
return "", ErrBackendException
}
return value, nil
}
// toValues returns multiple values from a path
// If the path does not lead to any value, an empty array is returned
// If a database error occurs, an empty array and an error are returned
func toValues(p *path.Path) ([]string, error) {
var values []string
it, _ := p.BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
if it.Result() != nil {
value := store.NameOf(it.Result())
if value != "" {
values = append(values, value)
}
}
}
if it.Err() != nil {
log.Errorf("failed query in toValues: %s", it.Err())
return []string{}, ErrBackendException
}
return values, nil
}
// saveFields appends cayley's Save method to a path for each field in
// selectedFields, except the ones that also appear in exceptFields
func saveFields(p *path.Path, selectedFields []string, exceptFields []string) {
for _, selectedField := range selectedFields {
if utils.Contains(selectedField, exceptFields) {
continue
}
p = p.Save(selectedField, selectedField)
}
}

81
database/database_test.go Normal file
View File

@ -0,0 +1,81 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"testing"
"github.com/google/cayley"
"github.com/stretchr/testify/assert"
)
func TestHealthcheck(t *testing.T) {
Open("memstore", "")
defer Close()
b := Healthcheck()
assert.True(t, b.IsHealthy, "Healthcheck failed")
}
func TestToValue(t *testing.T) {
Open("memstore", "")
defer Close()
// toValue()
v, err := toValue(cayley.StartPath(store, "tests").Out("are"))
assert.Nil(t, err, "toValue should work even if the requested path leads to nothing")
assert.Equal(t, "", v, "toValue should return an empty string if the requested path leads to nothing")
store.AddQuad(cayley.Quad("tests", "are", "awesome", ""))
v, err = toValue(cayley.StartPath(store, "tests").Out("are"))
assert.Nil(t, err, "toValue should have worked")
assert.Equal(t, "awesome", v, "toValue did not return the expected value")
store.AddQuad(cayley.Quad("tests", "are", "running", ""))
v, err = toValue(cayley.StartPath(store, "tests").Out("are"))
assert.NotNil(t, err, "toValue should return an error and an empty string if the path leads to multiple values")
assert.Equal(t, "", v, "toValue should return an error and an empty string if the path leads to multiple values")
// toValues()
vs, err := toValues(cayley.StartPath(store, "CoreOS").Out(FieldIs))
assert.Nil(t, err, "toValues should work even if the requested path leads to nothing")
assert.Len(t, vs, 0, "toValues should return an empty array if the requested path leads to nothing")
words := []string{"powerful", "lightweight"}
for i, word := range words {
store.AddQuad(cayley.Quad("CoreOS", FieldIs, word, ""))
v, err := toValues(cayley.StartPath(store, "CoreOS").Out(FieldIs))
assert.Nil(t, err, "toValues should have worked")
assert.Len(t, v, i+1, "toValues did not return the right amount of values")
for _, e := range words[:i+1] {
assert.Contains(t, v, e, "toValues did not return the values we expected")
}
}
// toValue(s)() and empty values
store.AddQuad(cayley.Quad("bob", "likes", "", ""))
v, err = toValue(cayley.StartPath(store, "bob").Out("likes"))
assert.Nil(t, err, "toValue should work even if the requested path leads to nothing")
assert.Equal(t, "", v, "toValue should return an empty string if the requested path leads to nothing")
store.AddQuad(cayley.Quad("bob", "likes", "running", ""))
v, err = toValue(cayley.StartPath(store, "bob").Out("likes"))
assert.Nil(t, err, "toValue should have worked")
assert.Equal(t, "running", v, "toValue did not return the expected value")
store.AddQuad(cayley.Quad("bob", "likes", "swimming", ""))
va, err := toValues(cayley.StartPath(store, "bob").Out("likes"))
assert.Nil(t, err, "toValues should have worked")
assert.Len(t, va, 2, "toValues should have returned 2 values")
}

58
database/flag.go Normal file
View File

@ -0,0 +1,58 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/google/cayley"
)
// UpdateFlag creates a flag or updates an existing flag's value
func UpdateFlag(name, value string) error {
if name == "" || value == "" {
log.Warning("could not insert a flag which has an empty name or value")
return cerrors.NewBadRequestError("could not insert a flag which has an empty name or value")
}
// Initialize transaction
t := cayley.NewTransaction()
// Get current flag value
currentValue, err := GetFlagValue(name)
if err != nil {
return err
}
// Build transaction
name = "flag:" + name
if currentValue != "" {
t.RemoveQuad(cayley.Quad(name, "value", currentValue, ""))
}
t.AddQuad(cayley.Quad(name, "value", value, ""))
// Apply transaction
if err = store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (UpdateFlag): %s", err)
return ErrTransaction
}
// Return
return nil
}
// GetFlagValue returns the value of the flag given by its name (or an empty string if the flag does not exist)
func GetFlagValue(name string) (string, error) {
return toValue(cayley.StartPath(store, "flag:"+name).Out("value"))
}

48
database/flag_test.go Normal file
View File

@ -0,0 +1,48 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestFlag(t *testing.T) {
Open("memstore", "")
defer Close()
// Get non existing flag
f, err := GetFlagValue("test")
assert.Nil(t, err, "GetFlagValue should have worked")
assert.Empty(t, "", f, "Getting a non-existing flag should return an empty string")
// Try to insert invalid flags
assert.Error(t, UpdateFlag("test", ""), "It should not accept a flag with an empty name or value")
assert.Error(t, UpdateFlag("", "test"), "It should not accept a flag with an empty name or value")
assert.Error(t, UpdateFlag("", ""), "It should not accept a flag with an empty name or value")
// Insert a flag and verify its value
assert.Nil(t, UpdateFlag("test", "test1"))
f, err = GetFlagValue("test")
assert.Nil(t, err, "GetFlagValue should have worked")
assert.Equal(t, "test1", f, "GetFlagValue did not return the expected value")
// Update a flag and verify its value
assert.Nil(t, UpdateFlag("test", "test2"))
f, err = GetFlagValue("test")
assert.Nil(t, err, "GetFlagValue should have worked")
assert.Equal(t, "test2", f, "GetFlagValue did not return the expected value")
}

377
database/layer.go Normal file
View File

@ -0,0 +1,377 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"strconv"
"github.com/coreos/quay-sec/utils"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/google/cayley/graph/path"
)
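// These predicates describe how a layer is stored in the graph: each layer
// node links to its parent, its operating system, its engine version and the
// package nodes it installs or removes.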
const (
FieldLayerIsValue = "layer"
FieldLayerID = "id"
FieldLayerParent = "parent"
FieldLayerSuccessors = "successors"
FieldLayerOS = "os"
FieldLayerInstalledPackages = "adds"
FieldLayerRemovedPackages = "removes"
FieldLayerEngineVersion = "engineVersion"
FieldLayerPackages = "adds/removes"
)
var FieldLayerAll = []string{FieldLayerID, FieldLayerParent, FieldLayerSuccessors, FieldLayerOS, FieldLayerPackages, FieldLayerEngineVersion}
// Layer represents a unique container layer
type Layer struct {
Node string `json:"-"`
ID string
ParentNode string `json:"-"`
SuccessorsNodes []string `json:"-"`
OS string
InstalledPackagesNodes []string `json:"-"`
RemovedPackagesNodes []string `json:"-"`
EngineVersion int
}
// GetNode returns the node name of a Layer
// Requires the key field: ID
func (l *Layer) GetNode() string {
return FieldLayerIsValue + ":" + utils.Hash(l.ID)
}
// InsertLayer inserts a single layer in the database.
//
// The ID and EngineVersion fields are required.
// ParentNode, OS, InstalledPackagesNodes and RemovedPackagesNodes are optional;
// SuccessorsNodes is unnecessary.
//
// The ID MUST be unique for two different layers.
//
// If the Layer already exists, nothing is done, except if the provided engine
// version is higher than the existing one, in which case the OS,
// InstalledPackagesNodes and RemovedPackagesNodes fields will be replaced.
//
// The layer should only contain the newly installed/removed packages.
// There is no safeguard that prevents marking a package as newly installed
// while it has already been installed in one of its parents.
func InsertLayer(layer *Layer) error {
// Verify parameters
if layer.ID == "" {
log.Warning("could not insert a layer which has an empty ID")
return cerrors.NewBadRequestError("could not insert a layer which has an empty ID")
}
// Create required data structures
t := cayley.NewTransaction()
layer.Node = layer.GetNode()
// Try to find an existing layer
existingLayer, err := FindOneLayerByNode(layer.Node, FieldLayerAll)
if err != nil && err != cerrors.ErrNotFound {
return err
}
if existingLayer != nil && existingLayer.EngineVersion >= layer.EngineVersion {
// The layer exists and has an equal or higher engine version, do nothing
return nil
}
if existingLayer == nil {
// Create case: add permanent nodes
t.AddQuad(cayley.Quad(layer.Node, FieldIs, FieldLayerIsValue, ""))
t.AddQuad(cayley.Quad(layer.Node, FieldLayerID, layer.ID, ""))
t.AddQuad(cayley.Quad(layer.Node, FieldLayerParent, layer.ParentNode, ""))
} else {
// Update case: remove everything before we add updated data
t.RemoveQuad(cayley.Quad(layer.Node, FieldLayerOS, existingLayer.OS, ""))
for _, pkg := range existingLayer.InstalledPackagesNodes {
t.RemoveQuad(cayley.Quad(layer.Node, FieldLayerInstalledPackages, pkg, ""))
}
for _, pkg := range existingLayer.RemovedPackagesNodes {
t.RemoveQuad(cayley.Quad(layer.Node, FieldLayerRemovedPackages, pkg, ""))
}
t.RemoveQuad(cayley.Quad(layer.Node, FieldLayerEngineVersion, strconv.Itoa(existingLayer.EngineVersion), ""))
}
// Add OS/Packages
t.AddQuad(cayley.Quad(layer.Node, FieldLayerOS, layer.OS, ""))
for _, pkg := range layer.InstalledPackagesNodes {
t.AddQuad(cayley.Quad(layer.Node, FieldLayerInstalledPackages, pkg, ""))
}
for _, pkg := range layer.RemovedPackagesNodes {
t.AddQuad(cayley.Quad(layer.Node, FieldLayerRemovedPackages, pkg, ""))
}
t.AddQuad(cayley.Quad(layer.Node, FieldLayerEngineVersion, strconv.Itoa(layer.EngineVersion), ""))
// Apply transaction
if err = store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (InsertLayer): %s", err)
return ErrTransaction
}
return nil
}
// FindOneLayerByID finds and returns a single layer having the given ID,
// selecting the specified fields; the ID field is filled from the argument
// instead of being queried.
func FindOneLayerByID(ID string, selectedFields []string) (*Layer, error) {
t := &Layer{ID: ID}
l, err := FindOneLayerByNode(t.GetNode(), selectedFields)
if err != nil {
return nil, err
}
l.ID = ID
return l, nil
}
// FindOneLayerByNode finds and returns a single layer by its node, selecting the specified fields
func FindOneLayerByNode(node string, selectedFields []string) (*Layer, error) {
l, err := toLayers(cayley.StartPath(store, node).Has(FieldIs, FieldLayerIsValue), selectedFields)
if err != nil {
return nil, err
}
if len(l) == 1 {
return l[0], nil
}
if len(l) > 1 {
log.Errorf("found multiple layers with identical node [Node: %s]", node)
return nil, ErrInconsistent
}
return nil, cerrors.ErrNotFound
}
// FindAllLayersByAddedPackageNodes finds and returns all layers that add the
// given packages (by their nodes), selecting the specified fields
func FindAllLayersByAddedPackageNodes(nodes []string, selectedFields []string) ([]*Layer, error) {
layers, err := toLayers(cayley.StartPath(store, nodes...).In(FieldLayerInstalledPackages), selectedFields)
if err != nil {
return []*Layer{}, err
}
return layers, nil
}
// FindAllLayersByPackageNode finds and returns all layers that have the given package (by its node), selecting the specified fields
// func FindAllLayersByPackageNode(node string, only map[string]struct{}) ([]*Layer, error) {
// var layers []*Layer
//
// // We need the successors field
// if only != nil {
// only[FieldLayerSuccessors] = struct{}{}
// }
//
// // Get all the layers which remove the package
// layersNodesRemoving, err := toValues(cayley.StartPath(store, node).In(FieldLayerRemovedPackages).Has(FieldIs, FieldLayerIsValue))
// if err != nil {
// return []*Layer{}, err
// }
// layersNodesRemovingMap := make(map[string]struct{})
// for _, l := range layersNodesRemoving {
// layersNodesRemovingMap[l] = struct{}{}
// }
//
// layersToBrowse, err := toLayers(cayley.StartPath(store, node).In(FieldLayerInstalledPackages).Has(FieldIs, FieldLayerIsValue), only)
// if err != nil {
// return []*Layer{}, err
// }
// for len(layersToBrowse) > 0 {
// var newLayersToBrowse []*Layer
// for _, layerToBrowse := range layersToBrowse {
// if _, layerRemovesPackage := layersNodesRemovingMap[layerToBrowse.Node]; !layerRemovesPackage {
// layers = append(layers, layerToBrowse)
// successors, err := layerToBrowse.Successors(only)
// if err != nil {
// return []*Layer{}, err
// }
// newLayersToBrowse = append(newLayersToBrowse, successors...)
// }
// layersToBrowse = newLayersToBrowse
// }
// }
//
// return layers, nil
// }
// toLayers converts a path leading to one or multiple layers to Layer structs,
// selecting the specified fields
func toLayers(path *path.Path, selectedFields []string) ([]*Layer, error) {
var layers []*Layer
saveFields(path, selectedFields, []string{FieldLayerSuccessors, FieldLayerPackages, FieldLayerInstalledPackages, FieldLayerRemovedPackages})
it, _ := path.BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
layer := Layer{Node: store.NameOf(it.Result())}
for _, selectedField := range selectedFields {
switch selectedField {
case FieldLayerID:
layer.ID = store.NameOf(tags[FieldLayerID])
case FieldLayerParent:
layer.ParentNode = store.NameOf(tags[FieldLayerParent])
case FieldLayerSuccessors:
var err error
layer.SuccessorsNodes, err = toValues(cayley.StartPath(store, layer.Node).In(FieldLayerParent))
if err != nil {
log.Errorf("could not get successors of layer %s: %s.", layer.Node, err.Error())
return nil, err
}
case FieldLayerOS:
layer.OS = store.NameOf(tags[FieldLayerOS])
case FieldLayerPackages:
it, _ := cayley.StartPath(store, layer.Node).OutWithTags([]string{"predicate"}, FieldLayerInstalledPackages, FieldLayerRemovedPackages).BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
predicate := store.NameOf(tags["predicate"])
if predicate == FieldLayerInstalledPackages {
layer.InstalledPackagesNodes = append(layer.InstalledPackagesNodes, store.NameOf(it.Result()))
} else if predicate == FieldLayerRemovedPackages {
layer.RemovedPackagesNodes = append(layer.RemovedPackagesNodes, store.NameOf(it.Result()))
}
}
if it.Err() != nil {
log.Errorf("could not get installed/removed packages of layer %s: %s.", layer.Node, it.Err())
return nil, ErrBackendException
}
case FieldLayerEngineVersion:
layer.EngineVersion, _ = strconv.Atoi(store.NameOf(tags[FieldLayerEngineVersion]))
default:
panic("unknown selectedField")
}
}
layers = append(layers, &layer)
}
if it.Err() != nil {
log.Errorf("failed query in toLayers: %s", it.Err())
return []*Layer{}, ErrBackendException
}
return layers, nil
}
// Successors finds and returns all layers that define l as their parent,
// selecting the specified fields.
// It requires that the FieldLayerSuccessors field has been selected on l.
// func (l *Layer) Successors(selectedFields []string) ([]*Layer, error) {
// if len(l.SuccessorsNodes) == 0 {
// return []*Layer{}, nil
// }
//
// return toLayers(cayley.StartPath(store, l.SuccessorsNodes...), only)
// }
// Parent finds and returns the parent layer of l, selecting the specified fields.
// It requires that the FieldLayerParent field has been selected on l.
func (l *Layer) Parent(selectedFields []string) (*Layer, error) {
if l.ParentNode == "" {
return nil, nil
}
parent, err := toLayers(cayley.StartPath(store, l.ParentNode), selectedFields)
if err != nil {
return nil, err
}
if len(parent) == 1 {
return parent[0], nil
}
if len(parent) > 1 {
log.Errorf("found multiple layers when getting parent layer of layer %s", l.ParentNode)
return nil, ErrInconsistent
}
return nil, nil
}
// Sublayers finds and returns all layers that compose l, selecting the specified
// fields.
// It requires that the FieldLayerParent field has been selected on l.
// The base image comes first, and l is last.
// func (l *Layer) Sublayers(selectedFields []string) ([]*Layer, error) {
// var sublayers []*Layer
//
// // We need the parent field
// if only != nil {
// only[FieldLayerParent] = struct{}{}
// }
//
// parent, err := l.Parent(only)
// if err != nil {
// return []*Layer{}, err
// }
// if parent != nil {
// parentSublayers, err := parent.Sublayers(only)
// if err != nil {
// return []*Layer{}, err
// }
// sublayers = append(sublayers, parentSublayers...)
// }
//
// sublayers = append(sublayers, l)
//
// return sublayers, nil
// }
// AllPackages computes the full list of packages that l has and returns them as
// nodes.
// It requires that the FieldLayerParent and FieldLayerPackages fields have been
// selected on l.
func (l *Layer) AllPackages() ([]string, error) {
var allPackages []string
parent, err := l.Parent([]string{FieldLayerParent, FieldLayerPackages})
if err != nil {
return []string{}, err
}
if parent != nil {
allPackages, err = parent.AllPackages()
if err != nil {
return []string{}, err
}
}
return append(utils.CompareStringLists(allPackages, l.RemovedPackagesNodes), l.InstalledPackagesNodes...), nil
}
// OperatingSystem tries to find the operating system of a layer using its
// parents.
// It requires that the FieldLayerParent and FieldLayerOS fields have been
// selected on l.
func (l *Layer) OperatingSystem() (string, error) {
if l.OS != "" {
return l.OS, nil
}
// Try from the parent
parent, err := l.Parent([]string{FieldLayerParent, FieldLayerOS})
if err != nil {
return "", err
}
if parent != nil {
return parent.OperatingSystem()
}
return "", nil
}

162
database/layer_test.go Normal file
View File

@ -0,0 +1,162 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"testing"
"github.com/coreos/quay-sec/utils"
"github.com/stretchr/testify/assert"
)
// TestInvalidLayers tries to insert invalid layers
func TestInvalidLayers(t *testing.T) {
Open("memstore", "")
defer Close()
assert.Error(t, InsertLayer(&Layer{ID: ""})) // No ID
}
// TestLayerSimple inserts a single layer and ensures it can be retrieved and
// that its methods work
func TestLayerSimple(t *testing.T) {
Open("memstore", "")
defer Close()
// Insert a layer and find it back
l1 := &Layer{ID: "l1", OS: "os1", InstalledPackagesNodes: []string{"p1", "p2"}, EngineVersion: 1}
if assert.Nil(t, InsertLayer(l1)) {
fl1, err := FindOneLayerByID("l1", FieldLayerAll)
if assert.Nil(t, err) && assert.NotNil(t, fl1) {
// Saved = found
assert.True(t, layerEqual(l1, fl1), "layers are not equal, expected %v, have %s", l1, fl1)
// No parent
p, err := fl1.Parent(FieldLayerAll)
assert.Nil(t, err)
assert.Nil(t, p)
// AllPackages()
pk, err := fl1.AllPackages()
assert.Nil(t, err)
if assert.Len(t, pk, 2) {
assert.Contains(t, pk, l1.InstalledPackagesNodes[0])
assert.Contains(t, pk, l1.InstalledPackagesNodes[1])
}
// OS()
o, err := fl1.OperatingSystem()
assert.Nil(t, err)
assert.Equal(t, l1.OS, o)
}
// FindAllLayersByAddedPackageNodes
al1, err := FindAllLayersByAddedPackageNodes([]string{"p1", "p3"}, FieldLayerAll)
if assert.Nil(t, err) && assert.Len(t, al1, 1) {
assert.Equal(t, al1[0].Node, l1.Node)
}
}
}
// TestLayerTree inserts a tree of layers and ensures that the tree logic works
func TestLayerTree(t *testing.T) {
Open("memstore", "")
defer Close()
var layers []*Layer
layers = append(layers, &Layer{ID: "l1"})
layers = append(layers, &Layer{ID: "l2", ParentNode: layers[0].GetNode(), OS: "os2", InstalledPackagesNodes: []string{"p1", "p2"}})
layers = append(layers, &Layer{ID: "l3", ParentNode: layers[1].GetNode()}) // Repeat an empty layer archive (l1)
layers = append(layers, &Layer{ID: "l4a", ParentNode: layers[2].GetNode(), InstalledPackagesNodes: []string{"p3"}, RemovedPackagesNodes: []string{"p1", "p4"}}) // p4 does not exists and thu can't actually be removed
layers = append(layers, &Layer{ID: "l4b", ParentNode: layers[2].GetNode(), InstalledPackagesNodes: []string{}, RemovedPackagesNodes: []string{"p2", "p1"}})
var flayers []*Layer
ok := true
for _, l := range layers {
ok = ok && assert.Nil(t, InsertLayer(l))
fl, err := FindOneLayerByID(l.ID, FieldLayerAll)
ok = ok && assert.Nil(t, err)
ok = ok && assert.NotNil(t, fl)
flayers = append(flayers, fl)
}
if assert.True(t, ok) {
// Start testing
// l4a
// Parent()
fl4ap, err := flayers[3].Parent(FieldLayerAll)
assert.Nil(t, err, "l4a should has l3 as parent")
if assert.NotNil(t, fl4ap, "l4a should has l3 as parent") {
assert.Equal(t, "l3", fl4ap.ID, "l4a should has l3 as parent")
}
// OS()
fl4ao, err := flayers[3].OperatingSystem()
assert.Nil(t, err, "l4a should inherits its OS from l2")
assert.Equal(t, "os2", fl4ao, "l4a should inherits its OS from l2")
// AllPackages()
fl4apkg, err := flayers[3].AllPackages()
assert.Nil(t, err)
if assert.Len(t, fl4apkg, 2) {
assert.Contains(t, fl4apkg, "p2")
assert.Contains(t, fl4apkg, "p3")
}
// l4b
// AllPackages()
fl4bpkg, err := flayers[4].AllPackages()
assert.Nil(t, err)
assert.Len(t, fl4bpkg, 0)
}
}
func TestLayerUpdate(t *testing.T) {
Open("memstore", "")
defer Close()
l1 := &Layer{ID: "l1", OS: "os1", InstalledPackagesNodes: []string{"p1", "p2"}, RemovedPackagesNodes: []string{"p3", "p4"}, EngineVersion: 1}
if assert.Nil(t, InsertLayer(l1)) {
// Do not update layer content if the engine versions are equals
l1b := &Layer{ID: "l1", OS: "os2", InstalledPackagesNodes: []string{"p1"}, RemovedPackagesNodes: []string{""}, EngineVersion: 1}
if assert.Nil(t, InsertLayer(l1b)) {
fl1b, err := FindOneLayerByID(l1.ID, FieldLayerAll)
if assert.Nil(t, err) && assert.NotNil(t, fl1b) {
assert.True(t, layerEqual(l1, fl1b), "layer contents are not equal, expected %v, have %s", l1, fl1b)
}
}
// Update the layer content with new data and a higher engine version
l1c := &Layer{ID: "l1", OS: "os2", InstalledPackagesNodes: []string{"p1", "p5"}, RemovedPackagesNodes: []string{"p6", "p7"}, EngineVersion: 2}
if assert.Nil(t, InsertLayer(l1c)) {
fl1c, err := FindOneLayerByID(l1c.ID, FieldLayerAll)
if assert.Nil(t, err) && assert.NotNil(t, fl1c) {
assert.True(t, layerEqual(l1c, fl1c), "layer contents are not equal, expected %v, have %s", l1c, fl1c)
}
}
}
}
func layerEqual(expected, actual *Layer) bool {
eq := true
eq = eq && expected.Node == actual.Node
eq = eq && expected.ID == actual.ID
eq = eq && expected.ParentNode == actual.ParentNode
eq = eq && expected.OS == actual.OS
eq = eq && expected.EngineVersion == actual.EngineVersion
eq = eq && len(utils.CompareStringLists(actual.SuccessorsNodes, expected.SuccessorsNodes)) == 0 && len(utils.CompareStringLists(expected.SuccessorsNodes, actual.SuccessorsNodes)) == 0
eq = eq && len(utils.CompareStringLists(actual.RemovedPackagesNodes, expected.RemovedPackagesNodes)) == 0 && len(utils.CompareStringLists(expected.RemovedPackagesNodes, actual.RemovedPackagesNodes)) == 0
eq = eq && len(utils.CompareStringLists(actual.InstalledPackagesNodes, expected.InstalledPackagesNodes)) == 0 && len(utils.CompareStringLists(expected.InstalledPackagesNodes, actual.InstalledPackagesNodes)) == 0
return eq
}

137
database/lock.go Normal file
View File

@ -0,0 +1,137 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"strconv"
"time"
"github.com/barakmich/glog"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/google/cayley/graph/path"
)
// Lock tries to set a temporary lock in the database.
// If a lock already exists with the given name/owner, then the lock is renewed.
//
// Lock does not block; instead, it returns true and the expiration time if the
// lock has been successfully acquired, or false otherwise.
func Lock(name string, duration time.Duration, owner string) (bool, time.Time) {
pruneLocks()
until := time.Now().Add(duration)
untilString := strconv.FormatInt(until.Unix(), 10)
// Try to get the expiration time of a lock with the same name/owner
currentExpiration, err := toValue(cayley.StartPath(store, name).Has("locked_by", owner).Out("locked_until"))
if err == nil && currentExpiration != "" {
// Renew our lock
if currentExpiration == untilString {
return true, until
}
t := cayley.NewTransaction()
t.RemoveQuad(cayley.Quad(name, "locked_until", currentExpiration, ""))
t.AddQuad(cayley.Quad(name, "locked_until", untilString, ""))
// It is not necessary to verify if the lock is ours again in the transaction
// because if someone took it, the lock's current expiration probably changed and the transaction will fail
return store.ApplyTransaction(t) == nil, until
}
t := cayley.NewTransaction()
t.AddQuad(cayley.Quad(name, "locked", "locked", "")) // Necessary to make the transaction fails if the lock already exists (and has not been pruned)
t.AddQuad(cayley.Quad(name, "locked_until", untilString, ""))
t.AddQuad(cayley.Quad(name, "locked_by", owner, ""))
glog.SetStderrThreshold("FATAL")
success := store.ApplyTransaction(t) == nil
glog.SetStderrThreshold("ERROR")
return success, until
}
// Unlock releases the lock with the given name if it is held by the given owner
func Unlock(name, owner string) {
pruneLocks()
t := cayley.NewTransaction()
it, _ := cayley.StartPath(store, name).Has("locked", "locked").Has("locked_by", owner).Save("locked_until", "locked_until").BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
t.RemoveQuad(cayley.Quad(name, "locked", "locked", ""))
t.RemoveQuad(cayley.Quad(name, "locked_until", store.NameOf(tags["locked_until"]), ""))
t.RemoveQuad(cayley.Quad(name, "locked_by", owner, ""))
}
store.ApplyTransaction(t)
}
// LockInfo returns the owner and the expiration time of the lock specified by
// its name.
func LockInfo(name string) (string, time.Time, error) {
it, _ := cayley.StartPath(store, name).Has("locked", "locked").Save("locked_until", "locked_until").Save("locked_by", "locked_by").BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
tt, _ := strconv.ParseInt(store.NameOf(tags["locked_until"]), 10, 64)
return store.NameOf(tags["locked_by"]), time.Unix(tt, 0), nil
}
if it.Err() != nil {
log.Errorf("failed query in LockInfo: %s", it.Err())
return "", time.Time{}, ErrBackendException
}
return "", time.Time{}, cerrors.ErrNotFound
}
// pruneLocks removes every expired lock from the database
func pruneLocks() {
now := time.Now()
// Delete every expired lock
tr := cayley.NewTransaction()
it, _ := cayley.StartPath(store, "locked").In("locked").Save("locked_until", "locked_until").Save("locked_by", "locked_by").BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
n := store.NameOf(it.Result())
t := store.NameOf(tags["locked_until"])
o := store.NameOf(tags["locked_by"])
tt, _ := strconv.ParseInt(t, 10, 64)
if now.Unix() > tt {
log.Debugf("Lock %s owned by %s has expired.", n, o)
tr.RemoveQuad(cayley.Quad(n, "locked", "locked", ""))
tr.RemoveQuad(cayley.Quad(n, "locked_until", t, ""))
tr.RemoveQuad(cayley.Quad(n, "locked_by", o, ""))
}
}
store.ApplyTransaction(tr)
}
// getLockedNodes returns every node that is currently locked
func getLockedNodes() *path.Path {
return cayley.StartPath(store, "locked").In("locked")
}

56
database/lock_test.go Normal file
View File

@ -0,0 +1,56 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestLock(t *testing.T) {
Open("memstore", "")
defer Close()
var l bool
var et time.Time
// Create a first lock
l, _ = Lock("test1", time.Minute, "owner1")
assert.True(t, l)
// Try to lock the same lock with another owner
l, _ = Lock("test1", time.Minute, "owner2")
assert.False(t, l)
// Renew the lock
l, _ = Lock("test1", time.Minute, "owner1")
assert.True(t, l)
// Unlock and then relock by someone else
Unlock("test1", "owner1")
l, et = Lock("test1", time.Minute, "owner2")
assert.True(t, l)
// LockInfo
o, et2, err := LockInfo("test1")
assert.Nil(t, err)
assert.Equal(t, "owner2", o)
assert.Equal(t, et.Second(), et2.Second())
// Create a second lock which is actually already expired ...
l, _ = Lock("test2", -time.Minute, "owner1")
assert.True(t, l)
// Take over the lock
l, _ = Lock("test2", time.Minute, "owner2")
assert.True(t, l)
}

402
database/notification.go Normal file
View File

@ -0,0 +1,402 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"encoding/json"
"strconv"
"github.com/coreos/quay-sec/utils"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/pborman/uuid"
)
// maxNotifications is the number of notifications that InsertNotifications
// will accept at the same time. Above this number, notifications are ignored.
const maxNotifications = 100
// A Notification defines an interface to a message that can be sent by a
// notifier.Notifier.
// A NotificationWrapper has to be used to convert it into a NotificationWrap,
// which can be stored in the database.
type Notification interface {
// GetName returns the explicit (human-readable) name of a notification.
GetName() string
// GetType returns the type of a notification, which is used by a
// NotificationWrapper to determine the concrete type of a Notification.
GetType() string
// GetContent returns the content of the notification.
GetContent() (interface{}, error)
}
// NotificationWrapper is an interface defining how to convert a Notification to
// a NotificationWrap object and vice-versa.
type NotificationWrapper interface {
// Wrap packs a Notification instance into a new NotificationWrap.
Wrap(n Notification) (*NotificationWrap, error)
// Unwrap unpacks an instance of NotificationWrap into a new Notification.
Unwrap(nw *NotificationWrap) (Notification, error)
}
// A NotificationWrap wraps a Notification into something that can be stored in
// the database. A NotificationWrapper has to be used to convert it into a
// Notification.
type NotificationWrap struct {
Type string
Data string
}
// DefaultWrapper is an implementation of NotificationWrapper that supports
// NewVulnerabilityNotification notifications.
type DefaultWrapper struct{}
func (w *DefaultWrapper) Wrap(n Notification) (*NotificationWrap, error) {
data, err := json.Marshal(n)
if err != nil {
log.Warningf("could not marshal notification [ID: %s, Type: %s]: %s", n.GetName(), n.GetType(), err)
return nil, cerrors.NewBadRequestError("could not marshal notification with DefaultWrapper")
}
return &NotificationWrap{Type: n.GetType(), Data: string(data)}, nil
}
func (w *DefaultWrapper) Unwrap(nw *NotificationWrap) (Notification, error) {
var v Notification
// Create struct depending on the type
switch nw.Type {
case "NewVulnerabilityNotification":
v = &NewVulnerabilityNotification{}
case "VulnerabilityPriorityIncreasedNotification":
v = &VulnerabilityPriorityIncreasedNotification{}
case "VulnerabilityPackageChangedNotification":
v = &VulnerabilityPackageChangedNotification{}
default:
log.Warningf("could not unwrap notification [Type: %s]: unknown type for DefaultWrapper", nw.Type)
return nil, cerrors.NewBadRequestError("could not unwrap notification")
}
// Unmarshal notification
err := json.Unmarshal([]byte(nw.Data), v)
if err != nil {
log.Warningf("could not unmarshal notification with DefaultWrapper [Type: %s]: %s", nw.Type, err)
return nil, cerrors.NewBadRequestError("could not unmarshal notification")
}
return v, nil
}
// GetDefaultNotificationWrapper returns the default wrapper
func GetDefaultNotificationWrapper() NotificationWrapper {
return &DefaultWrapper{}
}
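// A minimal sketch of a round trip through the default wrapper; the CVE
// identifier is illustrative only:
//
//	wrapper := GetDefaultNotificationWrapper()
//	wrap, err := wrapper.Wrap(&NewVulnerabilityNotification{VulnerabilityID: "CVE-2015-0001"})
//	if err == nil {
//		// wrap.Type is "NewVulnerabilityNotification" and wrap.Data holds the JSON payload.
//		notification, _ := wrapper.Unwrap(wrap)
//		_ = notification.GetName() // "CVE-2015-0001"
//	}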
// A NewVulnerabilityNotification is a notification that informs about a new
// vulnerability and contains all the layers that introduce that vulnerability
type NewVulnerabilityNotification struct {
VulnerabilityID string
}
func (n *NewVulnerabilityNotification) GetName() string {
return n.VulnerabilityID
}
func (n *NewVulnerabilityNotification) GetType() string {
return "NewVulnerabilityNotification"
}
func (n *NewVulnerabilityNotification) GetContent() (interface{}, error) {
// This notification is about a new vulnerability
// Returns the list of layers that introduce this vulnerability
// Find vulnerability.
vulnerability, err := FindOneVulnerability(n.VulnerabilityID, []string{FieldVulnerabilityID, FieldVulnerabilityLink, FieldVulnerabilityPriority, FieldVulnerabilityDescription, FieldVulnerabilityFixedIn})
if err != nil {
return []byte{}, err
}
abstractVulnerability, err := vulnerability.ToAbstractVulnerability()
if err != nil {
return []byte{}, err
}
layers, err := FindAllLayersIntroducingVulnerability(n.VulnerabilityID, []string{FieldLayerID})
if err != nil {
return []byte{}, err
}
layersIDs := []string{} // empty slice, not null
for _, l := range layers {
layersIDs = append(layersIDs, l.ID)
}
return struct {
Vulnerability *AbstractVulnerability
IntroducingLayersIDs []string
}{
Vulnerability: abstractVulnerability,
IntroducingLayersIDs: layersIDs,
}, nil
}
// A VulnerabilityPriorityIncreasedNotification is a notification that informs
// that the priority of a vulnerability has increased and contains all the
// layers that introduce that vulnerability.
type VulnerabilityPriorityIncreasedNotification struct {
VulnerabilityID string
OldPriority, NewPriority types.Priority
}
func (n *VulnerabilityPriorityIncreasedNotification) GetName() string {
return n.VulnerabilityID
}
func (n *VulnerabilityPriorityIncreasedNotification) GetType() string {
return "VulnerabilityPriorityIncreasedNotification"
}
func (n *VulnerabilityPriorityIncreasedNotification) GetContent() (interface{}, error) {
// Returns the list of layers that introduce this vulnerability
// And both the old and new priorities
// Find vulnerability.
vulnerability, err := FindOneVulnerability(n.VulnerabilityID, []string{FieldVulnerabilityID, FieldVulnerabilityLink, FieldVulnerabilityPriority, FieldVulnerabilityDescription, FieldVulnerabilityFixedIn})
if err != nil {
return []byte{}, err
}
abstractVulnerability, err := vulnerability.ToAbstractVulnerability()
if err != nil {
return []byte{}, err
}
layers, err := FindAllLayersIntroducingVulnerability(n.VulnerabilityID, []string{FieldLayerID})
if err != nil {
return []byte{}, err
}
layersIDs := []string{} // empty slice, not null
for _, l := range layers {
layersIDs = append(layersIDs, l.ID)
}
return struct {
Vulnerability *AbstractVulnerability
OldPriority, NewPriority types.Priority
IntroducingLayersIDs []string
}{
Vulnerability: abstractVulnerability,
OldPriority: n.OldPriority,
NewPriority: n.NewPriority,
IntroducingLayersIDs: layersIDs,
}, nil
}
// A VulnerabilityPackageChangedNotification is a notification that informs that
// an existing vulnerability's fixed package list has been updated and may not
// affect some layers anymore or may affect new layers.
type VulnerabilityPackageChangedNotification struct {
VulnerabilityID string
AddedFixedInNodes, RemovedFixedInNodes []string
}
func (n *VulnerabilityPackageChangedNotification) GetName() string {
return n.VulnerabilityID
}
func (n *VulnerabilityPackageChangedNotification) GetType() string {
return "VulnerabilityPackageChangedNotification"
}
func (n *VulnerabilityPackageChangedNotification) GetContent() (interface{}, error) {
// Returns the removed and added packages, the layers that no longer introduce
// the vulnerability because of the removed packages, and the layers that now
// introduce the vulnerability because of the added packages.
// Find vulnerability.
vulnerability, err := FindOneVulnerability(n.VulnerabilityID, []string{FieldVulnerabilityID, FieldVulnerabilityLink, FieldVulnerabilityPriority, FieldVulnerabilityDescription, FieldVulnerabilityFixedIn})
if err != nil {
return []byte{}, err
}
abstractVulnerability, err := vulnerability.ToAbstractVulnerability()
if err != nil {
return []byte{}, err
}
// First part of the answer: added/removed packages
addedPackages, err := FindAllPackagesByNodes(n.AddedFixedInNodes, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion, FieldPackagePreviousVersion})
if err != nil {
return []byte{}, err
}
removedPackages, err := FindAllPackagesByNodes(n.RemovedFixedInNodes, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion, FieldPackagePreviousVersion})
if err != nil {
return []byte{}, err
}
// Second part of the answer
var addedPackagesPreviousVersions []string
for _, pkg := range addedPackages {
previousVersions, err := pkg.PreviousVersions([]string{})
if err != nil {
return []byte{}, err
}
for _, version := range previousVersions {
addedPackagesPreviousVersions = append(addedPackagesPreviousVersions, version.Node)
}
}
var removedPackagesPreviousVersions []string
for _, pkg := range removedPackages {
previousVersions, err := pkg.PreviousVersions([]string{})
if err != nil {
return []byte{}, err
}
for _, version := range previousVersions {
removedPackagesPreviousVersions = append(removedPackagesPreviousVersions, version.Node)
}
}
newIntroducingLayers, err := FindAllLayersByAddedPackageNodes(addedPackagesPreviousVersions, []string{FieldLayerID})
if err != nil {
return []byte{}, err
}
formerIntroducingLayers, err := FindAllLayersByAddedPackageNodes(removedPackagesPreviousVersions, []string{FieldLayerID})
if err != nil {
return []byte{}, err
}
newIntroducingLayersIDs := []string{} // empty slice, not null
for _, l := range newIntroducingLayers {
newIntroducingLayersIDs = append(newIntroducingLayersIDs, l.ID)
}
formerIntroducingLayersIDs := []string{} // empty slice, not null
for _, l := range formerIntroducingLayers {
formerIntroducingLayersIDs = append(formerIntroducingLayersIDs, l.ID)
}
// Remove layers that appear in both the new and former lists (e.g. updated packages that are still vulnerable)
filteredNewIntroducingLayersIDs := utils.CompareStringLists(newIntroducingLayersIDs, formerIntroducingLayersIDs)
filteredFormerIntroducingLayersIDs := utils.CompareStringLists(formerIntroducingLayersIDs, newIntroducingLayersIDs)
return struct {
Vulnerability *AbstractVulnerability
AddedAffectedPackages, RemovedAffectedPackages []*AbstractPackage
NewIntroducingLayersIDs, FormerIntroducingLayerIDs []string
}{
Vulnerability: abstractVulnerability,
AddedAffectedPackages: PackagesToAbstractPackages(addedPackages),
RemovedAffectedPackages: PackagesToAbstractPackages(removedPackages),
NewIntroducingLayersIDs: filteredNewIntroducingLayersIDs,
FormerIntroducingLayerIDs: filteredFormerIntroducingLayersIDs,
}, nil
}
// InsertNotifications stores multiple Notifications in the database.
// It uses the given NotificationWrapper to convert these notifications to
// something that can be stored in the database.
func InsertNotifications(notifications []Notification, wrapper NotificationWrapper) error {
if len(notifications) == 0 {
return nil
}
// Do not send notifications if there are too many of them (first update for example)
if len(notifications) > maxNotifications {
log.Noticef("Ignoring %d notifications", len(notifications))
return nil
}
// Initialize transaction
t := cayley.NewTransaction()
// Iterate over all the notifications we need to insert
for _, notification := range notifications {
// Wrap notification
wrappedNotification, err := wrapper.Wrap(notification)
if err != nil {
return err
}
node := "notification:" + uuid.New()
t.AddQuad(cayley.Quad(node, FieldIs, "notification", ""))
t.AddQuad(cayley.Quad(node, "type", wrappedNotification.Type, ""))
t.AddQuad(cayley.Quad(node, "data", wrappedNotification.Data, ""))
t.AddQuad(cayley.Quad(node, "isSent", strconv.FormatBool(false), ""))
}
// Apply transaction
if err := store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (InsertNotifications): %s", err)
return ErrTransaction
}
return nil
}
// FindOneNotificationToSend finds and returns a notification that is not sent
// yet and not locked. Returns nil if there is none.
func FindOneNotificationToSend(wrapper NotificationWrapper) (string, Notification, error) {
it, _ := cayley.StartPath(store, "notification").In(FieldIs).Has("isSent", strconv.FormatBool(false)).Except(getLockedNodes()).Save("type", "type").Save("data", "data").BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
notification, err := wrapper.Unwrap(&NotificationWrap{Type: store.NameOf(tags["type"]), Data: store.NameOf(tags["data"])})
if err != nil {
return "", nil, err
}
return store.NameOf(it.Result()), notification, nil
}
if it.Err() != nil {
log.Errorf("failed query in FindOneNotificationToSend: %s", it.Err())
return "", nil, ErrBackendException
}
return "", nil, nil
}
// CountNotificationsToSend returns the number of pending notifications.
// Note that it also counts the locked notifications.
func CountNotificationsToSend() (int, error) {
c := 0
it, _ := cayley.StartPath(store, "notification").In(FieldIs).Has("isSent", strconv.FormatBool(false)).BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
c = c + 1
}
if it.Err() != nil {
log.Errorf("failed query in CountNotificationsToSend: %s", it.Err())
return 0, ErrBackendException
}
return c, nil
}
// MarkNotificationAsSent marks a notification as sent.
func MarkNotificationAsSent(node string) {
// Initialize transaction
t := cayley.NewTransaction()
t.RemoveQuad(cayley.Quad(node, "isSent", strconv.FormatBool(false), ""))
t.AddQuad(cayley.Quad(node, "isSent", strconv.FormatBool(true), ""))
// Apply transaction
store.ApplyTransaction(t)
}
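// A rough sketch of how a notifier might drain pending notifications; the
// send function and the owner name are hypothetical. Locking the notification
// node prevents several senders from picking the same notification:
//
//	for {
//		node, notification, err := FindOneNotificationToSend(GetDefaultNotificationWrapper())
//		if err != nil || notification == nil {
//			break
//		}
//		if acquired, _ := Lock(node, time.Minute, "sender-1"); acquired {
//			if send(notification) == nil { // hypothetical delivery function
//				MarkNotificationAsSent(node)
//			}
//			Unlock(node, "sender-1")
//		}
//	}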

144
database/notification_test.go Normal file
View File

@ -0,0 +1,144 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"encoding/json"
"fmt"
"reflect"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
type TestWrapper struct{}
func (w *TestWrapper) Wrap(n Notification) (*NotificationWrap, error) {
data, err := json.Marshal(n)
if err != nil {
return nil, err
}
return &NotificationWrap{Type: n.GetType(), Data: string(data)}, nil
}
func (w *TestWrapper) Unwrap(nw *NotificationWrap) (Notification, error) {
var v Notification
switch nw.Type {
case "ntest1":
v = &NotificationTest1{}
case "ntest2":
v = &NotificationTest2{}
default:
return nil, fmt.Errorf("Could not Unwrap NotificationWrapper [Type: %s, Data: %s]: Unknown notification type.", nw.Type, nw.Data)
}
err := json.Unmarshal([]byte(nw.Data), v)
return v, err
}
type NotificationTest1 struct {
Test1 string
}
func (n NotificationTest1) GetName() string {
return n.Test1
}
func (n NotificationTest1) GetType() string {
return "ntest1"
}
func (n NotificationTest1) GetContent() (interface{}, error) {
return struct{ Test1 string }{Test1: n.Test1}, nil
}
type NotificationTest2 struct {
Test2 string
}
func (n NotificationTest2) GetName() string {
return n.Test2
}
func (n NotificationTest2) GetType() string {
return "ntest2"
}
func (n NotificationTest2) GetContent() (interface{}, error) {
return struct{ Test2 string }{Test2: n.Test2}, nil
}
func TestNotification(t *testing.T) {
Open("memstore", "")
defer Close()
wrapper := &TestWrapper{}
// Insert two notifications of different types
n1 := &NotificationTest1{Test1: "test1"}
n2 := &NotificationTest2{Test2: "test2"}
err := InsertNotifications([]Notification{n1, n2}, &TestWrapper{})
assert.Nil(t, err)
// Count notifications to send
c, err := CountNotificationsToSend()
assert.Nil(t, err)
assert.Equal(t, 2, c)
foundN1 := false
foundN2 := false
// Select the first one
node, n, err := FindOneNotificationToSend(wrapper)
assert.Nil(t, err)
if assert.NotNil(t, n) {
if reflect.DeepEqual(n1, n) {
foundN1 = true
} else if reflect.DeepEqual(n2, n) {
foundN2 = true
} else {
assert.Fail(t, "did not find any expected notification")
return
}
}
// Mark the first one as sent
MarkNotificationAsSent(node)
// Count notifications to send
c, err = CountNotificationsToSend()
assert.Nil(t, err)
assert.Equal(t, 1, c)
// Select again
node, n, err = FindOneNotificationToSend(wrapper)
assert.Nil(t, err)
if foundN1 {
assert.Equal(t, n2, n)
} else if foundN2 {
assert.Equal(t, n1, n)
}
// Lock the second one
Lock(node, time.Minute, "TestNotification")
// Select again
_, n, err = FindOneNotificationToSend(wrapper)
assert.Nil(t, err)
assert.Equal(t, nil, n)
}

44
database/os_mapping.go Normal file
View File

@ -0,0 +1,44 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
// DebianReleasesMapping translates Debian code names and class names to version numbers
// TODO That should probably be stored in the database or in a file
var DebianReleasesMapping = map[string]string{
// Code names
"squeeze": "6",
"wheezy": "7",
"jessie": "8",
"stretch": "9",
"sid": "unstable",
// Class names
"oldstable": "7",
"stable": "8",
"testing": "9",
"unstable": "unstable",
}
// UbuntuReleasesMapping translates Ubuntu code names to version numbers
// TODO That should probably be stored in the database or in a file
var UbuntuReleasesMapping = map[string]string{
"precise": "12.04",
"quantal": "12.10",
"raring": "13.04",
"trusty": "14.04",
"utopic": "14.10",
"vivid": "15.04",
"wily": "15.10",
}
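// A minimal lookup sketch; "jessie" is only an example code name:
//
//	if version, ok := DebianReleasesMapping["jessie"]; ok {
//		_ = version // "8"
//	}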

485
database/package.go Normal file
View File

@ -0,0 +1,485 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"sort"
"github.com/coreos/quay-sec/utils"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/google/cayley/graph/path"
)
const (
FieldPackageIsValue = "package"
FieldPackageOS = "os"
FieldPackageName = "name"
FieldPackageVersion = "version"
FieldPackageNextVersion = "nextVersion"
FieldPackagePreviousVersion = "previousVersion"
insertPackagesBatchSize = 5
)
var FieldPackageAll = []string{FieldPackageOS, FieldPackageName, FieldPackageVersion, FieldPackageNextVersion, FieldPackagePreviousVersion}
// Package represents a package
type Package struct {
Node string `json:"-"`
OS string
Name string
Version types.Version
NextVersionNode string `json:"-"`
PreviousVersionNode string `json:"-"`
}
// GetNode returns a unique identifier for the graph node
// Requires the key fields: OS, Name, Version
func (p *Package) GetNode() string {
return FieldPackageIsValue + ":" + utils.Hash(p.Key())
}
// Key returns a unique string defining p
// Requires the key fields: OS, Name, Version
func (p *Package) Key() string {
return p.OS + ":" + p.Name + ":" + p.Version.String()
}
// Branch returns a unique string defining the Branch of p (os, name)
// Requires the key fields: OS, Name
func (p *Package) Branch() string {
return p.OS + ":" + p.Name
}
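// For a hypothetical package {OS: "debian:8", Name: "openssl", Version: "1.0.1k"},
// these identifiers look like:
//
//	p.Branch()  // "debian:8:openssl"
//	p.Key()     // "debian:8:openssl:1.0.1k"
//	p.GetNode() // "package:" + utils.Hash("debian:8:openssl:1.0.1k")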
// AbstractPackage is a package that abstracts types.MaxVersion by using an
// AllVersions boolean field and renaming Version to BeforeVersion, which
// makes more sense when used with a Vulnerability.
type AbstractPackage struct {
OS string
Name string
AllVersions bool
BeforeVersion types.Version
}
// PackagesToAbstractPackages converts several Packages to AbstractPackages
func PackagesToAbstractPackages(packages []*Package) (abstractPackages []*AbstractPackage) {
for _, p := range packages {
ap := &AbstractPackage{OS: p.OS, Name: p.Name}
if p.Version != types.MaxVersion {
ap.BeforeVersion = p.Version
} else {
ap.AllVersions = true
}
abstractPackages = append(abstractPackages, ap)
}
return
}
// AbstractPackagesToPackages converts several AbstractPackages to Packages
func AbstractPackagesToPackages(abstractPackages []*AbstractPackage) (packages []*Package) {
for _, ap := range abstractPackages {
p := &Package{OS: ap.OS, Name: ap.Name}
if ap.AllVersions {
p.Version = types.MaxVersion
} else {
p.Version = ap.BeforeVersion
}
packages = append(packages, p)
}
return
}
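// The conversion rule in both directions, shown for any package of a branch:
//
//	Package{Version: types.MaxVersion}  <->  AbstractPackage{AllVersions: true}
//	Package{Version: v}                 <->  AbstractPackage{BeforeVersion: v}  (any other version v)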
// InsertPackages inserts several packages in the database in one transaction
// Packages are stored in linked lists, one per Branch. Each linked list has a start package and an end package defined with types.MinVersion/types.MaxVersion versions
//
// OS, Name and Version fields have to be specified.
// If the insertion is successful, the Node field is filled and represents the graph node identifier.
func InsertPackages(packageParameters []*Package) error {
if len(packageParameters) == 0 {
return nil
}
// Verify parameters
for _, pkg := range packageParameters {
if pkg.OS == "" || pkg.Name == "" || pkg.Version.String() == "" {
log.Warningf("could not insert an incomplete package [OS: %s, Name: %s, Version: %s]", pkg.OS, pkg.Name, pkg.Version)
return cerrors.NewBadRequestError("could not insert an incomplete package")
}
}
// Create required data structures
t := cayley.NewTransaction()
packagesInTransaction := 0
cachedPackagesByBranch := make(map[string]map[string]*Package)
// Iterate over all the packages we need to insert
for _, packageParameter := range packageParameters {
branch := packageParameter.Branch()
// Does the package already exist?
if _, branchExistsLocally := cachedPackagesByBranch[branch]; branchExistsLocally {
if pkg, _ := cachedPackagesByBranch[branch][packageParameter.Key()]; pkg != nil {
packageParameter.Node = pkg.Node
continue
}
} else {
cachedPackagesByBranch[branch] = make(map[string]*Package)
}
pkg, err := FindOnePackage(packageParameter.OS, packageParameter.Name, packageParameter.Version, []string{})
if err != nil && err != cerrors.ErrNotFound {
return err
}
if pkg != nil {
packageParameter.Node = pkg.Node
continue
}
// Get all packages of the same branch (both from local cache and database)
branchPackages, err := FindAllPackagesByBranch(packageParameter.OS, packageParameter.Name, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion, FieldPackageNextVersion})
if err != nil {
return err
}
for _, p := range cachedPackagesByBranch[branch] {
branchPackages = append(branchPackages, p)
}
if len(branchPackages) == 0 {
// The branch does not exist yet
insertingStartPackage := packageParameter.Version == types.MinVersion
insertingEndPackage := packageParameter.Version == types.MaxVersion
// Create and insert an end package
endPackage := &Package{
OS: packageParameter.OS,
Name: packageParameter.Name,
Version: types.MaxVersion,
}
endPackage.Node = endPackage.GetNode()
cachedPackagesByBranch[branch][endPackage.Key()] = endPackage
t.AddQuad(cayley.Quad(endPackage.Node, FieldIs, FieldPackageIsValue, ""))
t.AddQuad(cayley.Quad(endPackage.Node, FieldPackageOS, endPackage.OS, ""))
t.AddQuad(cayley.Quad(endPackage.Node, FieldPackageName, endPackage.Name, ""))
t.AddQuad(cayley.Quad(endPackage.Node, FieldPackageVersion, endPackage.Version.String(), ""))
t.AddQuad(cayley.Quad(endPackage.Node, FieldPackageNextVersion, "", ""))
// Create the inserted package if it is different from a start/end package
var newPackage *Package
if !insertingStartPackage && !insertingEndPackage {
newPackage = &Package{
OS: packageParameter.OS,
Name: packageParameter.Name,
Version: packageParameter.Version,
}
newPackage.Node = newPackage.GetNode()
cachedPackagesByBranch[branch][newPackage.Key()] = newPackage
t.AddQuad(cayley.Quad(newPackage.Node, FieldIs, FieldPackageIsValue, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageOS, newPackage.OS, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageName, newPackage.Name, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageVersion, newPackage.Version.String(), ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageNextVersion, endPackage.Node, ""))
packageParameter.Node = newPackage.Node
}
// Create and insert a start package
startPackage := &Package{
OS: packageParameter.OS,
Name: packageParameter.Name,
Version: types.MinVersion,
}
startPackage.Node = startPackage.GetNode()
cachedPackagesByBranch[branch][startPackage.Key()] = startPackage
t.AddQuad(cayley.Quad(startPackage.Node, FieldIs, FieldPackageIsValue, ""))
t.AddQuad(cayley.Quad(startPackage.Node, FieldPackageOS, startPackage.OS, ""))
t.AddQuad(cayley.Quad(startPackage.Node, FieldPackageName, startPackage.Name, ""))
t.AddQuad(cayley.Quad(startPackage.Node, FieldPackageVersion, startPackage.Version.String(), ""))
if !insertingStartPackage && !insertingEndPackage {
t.AddQuad(cayley.Quad(startPackage.Node, FieldPackageNextVersion, newPackage.Node, ""))
} else {
t.AddQuad(cayley.Quad(startPackage.Node, FieldPackageNextVersion, endPackage.Node, ""))
}
// Set package node
if insertingEndPackage {
packageParameter.Node = endPackage.Node
} else if insertingStartPackage {
packageParameter.Node = startPackage.Node
}
} else {
// The branch already exists
// Create the package
newPackage := &Package{OS: packageParameter.OS, Name: packageParameter.Name, Version: packageParameter.Version}
newPackage.Node = "package:" + utils.Hash(newPackage.Key())
cachedPackagesByBranch[branch][newPackage.Key()] = newPackage
packageParameter.Node = newPackage.Node
t.AddQuad(cayley.Quad(newPackage.Node, FieldIs, FieldPackageIsValue, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageOS, newPackage.OS, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageName, newPackage.Name, ""))
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageVersion, newPackage.Version.String(), ""))
// Sort branchPackages by version (including the new package)
branchPackages = append(branchPackages, newPackage)
sort.Sort(ByVersion(branchPackages))
// Find the predecessor and successor nodes in the sorted slice
newPackageKey := newPackage.Key()
var pred, succ *Package
var found bool
for _, p := range branchPackages {
equal := p.Key() == newPackageKey
if !equal && !found {
pred = p
} else if found {
succ = p
break
} else if equal {
found = true
continue
}
}
if pred == nil || succ == nil {
log.Warningf("could not find any package predecessor/successor of: [OS: %s, Name: %s, Version: %s].", packageParameter.OS, packageParameter.Name, packageParameter.Version)
return cerrors.NewBadRequestError("could not find package predecessor/successor")
}
// Link the new package with the branch
t.RemoveQuad(cayley.Quad(pred.Node, FieldPackageNextVersion, succ.Node, ""))
pred.NextVersionNode = newPackage.Node
t.AddQuad(cayley.Quad(pred.Node, FieldPackageNextVersion, newPackage.Node, ""))
newPackage.NextVersionNode = succ.Node
t.AddQuad(cayley.Quad(newPackage.Node, FieldPackageNextVersion, succ.Node, ""))
}
packagesInTransaction = packagesInTransaction + 1
// Apply transaction
if packagesInTransaction >= insertPackagesBatchSize {
if err := store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (InsertPackages): %s", err)
return ErrTransaction
}
t = cayley.NewTransaction()
cachedPackagesByBranch = make(map[string]map[string]*Package)
packagesInTransaction = 0
}
}
// Apply transaction
if packagesInTransaction > 0 {
if err := store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (InsertPackages): %s", err)
return ErrTransaction
}
}
// Return
return nil
}
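// The branch layout that InsertPackages maintains, shown for a hypothetical
// "debian:8"/"openssl" branch after inserting versions 1.0.1k and 1.0.2d;
// each arrow is a FieldPackageNextVersion quad and the ends of the list are
// the special types.MinVersion/types.MaxVersion packages:
//
//	[MinVersion] -> [1.0.1k] -> [1.0.2d] -> [MaxVersion]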
// FindOnePackage finds and returns a single package having the given OS, name and version, selecting the specified fields
func FindOnePackage(OS, name string, version types.Version, selectedFields []string) (*Package, error) {
packageParameter := Package{OS: OS, Name: name, Version: version}
p, err := toPackages(cayley.StartPath(store, packageParameter.GetNode()).Has(FieldIs, FieldPackageIsValue), selectedFields)
if err != nil {
return nil, err
}
if len(p) == 1 {
return p[0], nil
}
if len(p) > 1 {
log.Errorf("found multiple packages with identical data [OS: %s, Name: %s, Version: %s]", OS, name, version)
return nil, ErrInconsistent
}
return nil, cerrors.ErrNotFound
}
// FindAllPackagesByNodes finds and returns all packages given by their nodes, selecting the specified fields
func FindAllPackagesByNodes(nodes []string, selectedFields []string) ([]*Package, error) {
if len(nodes) == 0 {
log.Warning("could not FindAllPackagesByNodes with an empty nodes array.")
return []*Package{}, nil
}
return toPackages(cayley.StartPath(store, nodes...).Has(FieldIs, FieldPackageIsValue), selectedFields)
}
// FindAllPackagesByBranch finds and returns all packages that belong to the given Branch, selecting the specified fields
func FindAllPackagesByBranch(OS, name string, selectedFields []string) ([]*Package, error) {
return toPackages(cayley.StartPath(store, name).In(FieldPackageName).Has(FieldPackageOS, OS), selectedFields)
}
// toPackages converts a path leading to one or multiple packages to Package structs, selecting the specified fields
func toPackages(path *path.Path, selectedFields []string) ([]*Package, error) {
var packages []*Package
var err error
saveFields(path, selectedFields, []string{FieldPackagePreviousVersion})
it, _ := path.BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
pkg := Package{Node: store.NameOf(it.Result())}
for _, selectedField := range selectedFields {
switch selectedField {
case FieldPackageOS:
pkg.OS = store.NameOf(tags[FieldPackageOS])
case FieldPackageName:
pkg.Name = store.NameOf(tags[FieldPackageName])
case FieldPackageVersion:
pkg.Version, err = types.NewVersion(store.NameOf(tags[FieldPackageVersion]))
if err != nil {
log.Warningf("could not parse version of package %s: %s", pkg.Node, err.Error())
}
case FieldPackageNextVersion:
pkg.NextVersionNode = store.NameOf(tags[FieldPackageNextVersion])
case FieldPackagePreviousVersion:
pkg.PreviousVersionNode, err = toValue(cayley.StartPath(store, pkg.Node).In(FieldPackageNextVersion))
if err != nil {
log.Warningf("could not get previousVersion on package %s: %s.", pkg.Node, err.Error())
return []*Package{}, ErrInconsistent
}
default:
panic("unknown selectedField")
}
}
packages = append(packages, &pkg)
}
if it.Err() != nil {
log.Errorf("failed query in toPackages: %s", it.Err())
return []*Package{}, ErrBackendException
}
return packages, nil
}
// NextVersion finds and returns the package of the same branch that has the immediately higher version number, selecting the specified fields
// It requires that the FieldPackageNextVersion field has been selected on p
func (p *Package) NextVersion(selectedFields []string) (*Package, error) {
if p.NextVersionNode == "" {
return nil, nil
}
v, err := FindAllPackagesByNodes([]string{p.NextVersionNode}, selectedFields)
if err != nil {
return nil, err
}
if len(v) != 1 {
log.Errorf("found multiple packages when getting next version of package %s", p.Node)
return nil, ErrInconsistent
}
return v[0], nil
}
// NextVersions finds and returns all the packages of the same branch that have
// a higher version number, selecting the specified fields
// It requires that the FieldPackageNextVersion field has been selected on p
// The immediately higher version is listed first, and the special end-of-Branch package is last; p is not listed
func (p *Package) NextVersions(selectedFields []string) ([]*Package, error) {
var nextVersions []*Package
if !utils.Contains(FieldPackageNextVersion, selectedFields) {
selectedFields = append(selectedFields, FieldPackageNextVersion)
}
nextVersion, err := p.NextVersion(selectedFields)
if err != nil {
return []*Package{}, err
}
if nextVersion != nil {
nextVersions = append(nextVersions, nextVersion)
nextNextVersions, err := nextVersion.NextVersions(selectedFields)
if err != nil {
return []*Package{}, err
}
nextVersions = append(nextVersions, nextNextVersions...)
}
return nextVersions, nil
}
// PreviousVersion finds and returns the package of the same branch that has the
// immediately lower version number, selecting the specified fields
// It requires that the FieldPackagePreviousVersion field has been selected on p
func (p *Package) PreviousVersion(selectedFields []string) (*Package, error) {
if p.PreviousVersionNode == "" {
return nil, nil
}
v, err := FindAllPackagesByNodes([]string{p.PreviousVersionNode}, selectedFields)
if err != nil {
return nil, err
}
if len(v) == 0 {
return nil, nil
}
if len(v) != 1 {
log.Errorf("found multiple packages when getting previous version of package %s", p.Node)
return nil, ErrInconsistent
}
return v[0], nil
}
// PreviousVersions finds and returns all the packages of the same branch that
// have a lower version number, selecting the specified fields
// It requires that the FieldPackagePreviousVersion field has been selected on p
// The immediately lower version is listed first, and the special start-of-Branch
// package is last; p is not listed
func (p *Package) PreviousVersions(selectedFields []string) ([]*Package, error) {
var previousVersions []*Package
if !utils.Contains(FieldPackagePreviousVersion, selectedFields) {
selectedFields = append(selectedFields, FieldPackagePreviousVersion)
}
previousVersion, err := p.PreviousVersion(selectedFields)
if err != nil {
return []*Package{}, err
}
if previousVersion != nil {
previousVersions = append(previousVersions, previousVersion)
previousPreviousVersions, err := previousVersion.PreviousVersions(selectedFields)
if err != nil {
return []*Package{}, err
}
previousVersions = append(previousVersions, previousPreviousVersions...)
}
return previousVersions, nil
}
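// A minimal traversal sketch, assuming the package and versions below exist in
// the database:
//
//	pkg, err := FindOnePackage("debian:8", "openssl", types.NewVersionUnsafe("1.0.1k"), FieldPackageAll)
//	if err == nil {
//		newer, _ := pkg.NextVersions(FieldPackageAll)     // up to the MaxVersion sentinel
//		older, _ := pkg.PreviousVersions(FieldPackageAll) // down to the MinVersion sentinel
//		_, _ = newer, older
//	}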
// ByVersion implements sort.Interface for []*Package based on the Version field
// It uses github.com/quentin-m/dpkgcomp internally and makes use of types.MinVersion/types.MaxVersion
type ByVersion []*Package
func (p ByVersion) Len() int { return len(p) }
func (p ByVersion) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
func (p ByVersion) Less(i, j int) bool { return p[i].Version.Compare(p[j].Version) < 0 }

193
database/package_test.go Normal file
View File

@ -0,0 +1,193 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"math/rand"
"sort"
"testing"
"time"
"github.com/coreos/quay-sec/utils/types"
"github.com/stretchr/testify/assert"
)
func TestPackage(t *testing.T) {
Open("memstore", "")
defer Close()
// Try to insert invalid packages
for _, invalidPkg := range []*Package{
&Package{OS: "", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")},
&Package{OS: "testOS", Name: "", Version: types.NewVersionUnsafe("1.0")},
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("")},
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("bad version")},
&Package{OS: "", Name: "", Version: types.NewVersionUnsafe("")},
} {
err := InsertPackages([]*Package{invalidPkg})
assert.Error(t, err)
}
// Insert a package
pkg1 := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")}
err := InsertPackages([]*Package{pkg1})
if assert.Nil(t, err) {
// Find the inserted package and verify its content
pkg1b, err := FindOnePackage(pkg1.OS, pkg1.Name, pkg1.Version, FieldPackageAll)
if assert.Nil(t, err) && assert.NotNil(t, pkg1b) {
assert.Equal(t, pkg1.Node, pkg1b.Node)
assert.Equal(t, pkg1.OS, pkg1b.OS)
assert.Equal(t, pkg1.Name, pkg1b.Name)
assert.Equal(t, pkg1.Version, pkg1b.Version)
}
// Find packages from the inserted branch and verify their content
// (the first one should be a start package, the second one the inserted one and the third one the end package)
pkgs1c, err := FindAllPackagesByBranch(pkg1.OS, pkg1.Name, FieldPackageAll)
if assert.Nil(t, err) && assert.Equal(t, 3, len(pkgs1c)) {
sort.Sort(ByVersion(pkgs1c))
assert.Equal(t, pkg1.OS, pkgs1c[0].OS)
assert.Equal(t, pkg1.Name, pkgs1c[0].Name)
assert.Equal(t, types.MinVersion, pkgs1c[0].Version)
assert.Equal(t, pkg1.OS, pkgs1c[1].OS)
assert.Equal(t, pkg1.Name, pkgs1c[1].Name)
assert.Equal(t, pkg1.Version, pkgs1c[1].Version)
assert.Equal(t, pkg1.OS, pkgs1c[2].OS)
assert.Equal(t, pkg1.Name, pkgs1c[2].Name)
assert.Equal(t, types.MaxVersion, pkgs1c[2].Version)
}
}
// Insert multiple packages in the same branch, one in another branch, insert local duplicates and database duplicates as well
pkg2 := []*Package{
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("0.8")},
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("0.9")},
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")}, // Already present in the database
&Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.1")},
&Package{OS: "testOS", Name: "testpkg2", Version: types.NewVersionUnsafe("1.0")}, // Another branch
&Package{OS: "testOS", Name: "testpkg2", Version: types.NewVersionUnsafe("1.0")}, // Local duplicates
}
nbInSameBranch := 4 + 2 // (start/end packages)
err = InsertPackages(shuffle(pkg2))
if assert.Nil(t, err) {
// Find packages from the inserted branch, verify their order and NextVersion / PreviousVersion
pkgs2b, err := FindAllPackagesByBranch("testOS", "testpkg1", FieldPackageAll)
if assert.Nil(t, err) && assert.Equal(t, nbInSameBranch, len(pkgs2b)) {
sort.Sort(ByVersion(pkgs2b))
for i := 0; i < nbInSameBranch; i = i + 1 {
if i == 0 {
assert.Equal(t, types.MinVersion, pkgs2b[0].Version)
} else if i < nbInSameBranch-2 {
assert.Equal(t, pkg2[i].Version, pkgs2b[i+1].Version)
nv, err := pkgs2b[i+1].NextVersion(FieldPackageAll)
assert.Nil(t, err)
assert.Equal(t, pkgs2b[i+2], nv)
if i > 0 {
pv, err := pkgs2b[i].PreviousVersion(FieldPackageAll)
assert.Nil(t, err)
assert.Equal(t, pkgs2b[i-1], pv)
} else {
pv, err := pkgs2b[i].PreviousVersion(FieldPackageAll)
assert.Nil(t, err)
assert.Nil(t, pv)
}
} else {
assert.Equal(t, types.MaxVersion, pkgs2b[nbInSameBranch-1].Version)
nv, err := pkgs2b[nbInSameBranch-1].NextVersion(FieldPackageAll)
assert.Nil(t, err)
assert.Nil(t, nv)
pv, err := pkgs2b[i].PreviousVersion(FieldPackageAll)
assert.Nil(t, err)
assert.Equal(t, pkgs2b[i-1], pv)
}
}
// NextVersions
nv, err := pkgs2b[0].NextVersions(FieldPackageAll)
if assert.Nil(t, err) && assert.Len(t, nv, nbInSameBranch-1) {
for i := 0; i < nbInSameBranch-1; i = i + 1 {
if i < nbInSameBranch-2 {
assert.Equal(t, pkg2[i].Version, nv[i].Version)
} else {
assert.Equal(t, types.MaxVersion, nv[i].Version)
}
}
}
// PreviousVersions
pv, err := pkgs2b[nbInSameBranch-1].PreviousVersions(FieldPackageAll)
if assert.Nil(t, err) && assert.Len(t, pv, nbInSameBranch-1) {
for i := 0; i < len(pv); i = i + 1 {
assert.Equal(t, pkgs2b[len(pkgs2b)-i-2], pv[i])
}
}
}
// Verify that the package we added that was already present in the database has the same node value (meaning that we actually just fetched it)
assert.Contains(t, pkg2, pkg1)
}
// Insert duplicated latest packages directly, ensure only one is actually inserted. Then insert another package in the branch and ensure that its next version is the latest one
pkg3a := &Package{OS: "testOS", Name: "testpkg3", Version: types.MaxVersion}
pkg3b := &Package{OS: "testOS", Name: "testpkg3", Version: types.MaxVersion}
pkg3c := &Package{OS: "testOS", Name: "testpkg3", Version: types.MaxVersion}
err1 := InsertPackages([]*Package{pkg3a, pkg3b})
err2 := InsertPackages([]*Package{pkg3c})
if assert.Nil(t, err1) && assert.Nil(t, err2) {
assert.Equal(t, pkg3a, pkg3b)
assert.Equal(t, pkg3b, pkg3c)
}
pkg4 := Package{OS: "testOS", Name: "testpkg3", Version: types.NewVersionUnsafe("1.0")}
InsertPackages([]*Package{&pkg4})
pkgs34, _ := FindAllPackagesByBranch("testOS", "testpkg3", FieldPackageAll)
if assert.Len(t, pkgs34, 3) {
sort.Sort(ByVersion(pkgs34))
assert.Equal(t, pkg4.Node, pkgs34[1].Node)
assert.Equal(t, pkg3a.Node, pkgs34[2].Node)
assert.Equal(t, pkg3a.Node, pkgs34[1].NextVersionNode)
}
// Insert two identical packages but with "different" versions
// The second version should be simplified to the first one
// Therefore, we should just have three packages (the inserted one and the start/end packages of the branch)
InsertPackages([]*Package{&Package{OS: "testOS", Name: "testdirtypkg", Version: types.NewVersionUnsafe("0.1")}})
InsertPackages([]*Package{&Package{OS: "testOS", Name: "testdirtypkg", Version: types.NewVersionUnsafe("0:0.1")}})
dirtypkgs, err := FindAllPackagesByBranch("testOS", "testdirtypkg", FieldPackageAll)
assert.Nil(t, err)
assert.Len(t, dirtypkgs, 3)
}
func shuffle(packageParameters []*Package) []*Package {
rand.Seed(int64(time.Now().Nanosecond()))
sPackage := make([]*Package, len(packageParameters))
copy(sPackage, packageParameters)
for i := len(sPackage) - 1; i > 0; i-- {
j := rand.Intn(i)
sPackage[i], sPackage[j] = sPackage[j], sPackage[i]
}
return sPackage
}

51
database/requests.go Normal file
View File

@ -0,0 +1,51 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import cerrors "github.com/coreos/quay-sec/utils/errors"
// FindAllLayersIntroducingVulnerability finds and returns the list of layers
// that introduce the given vulnerability (by its ID), selecting the specified fields
func FindAllLayersIntroducingVulnerability(vulnerabilityID string, selectedFields []string) ([]*Layer, error) {
// Find vulnerability
vulnerability, err := FindOneVulnerability(vulnerabilityID, []string{FieldVulnerabilityFixedIn})
if err != nil {
return []*Layer{}, err
}
if vulnerability == nil {
return []*Layer{}, cerrors.ErrNotFound
}
// Find FixedIn packages
fixedInPackages, err := FindAllPackagesByNodes(vulnerability.FixedInNodes, []string{FieldPackagePreviousVersion})
if err != nil {
return []*Layer{}, err
}
// Find all the FixedIn packages' ancestor packages (which are therefore vulnerable to the vulnerability)
var vulnerablePackagesNodes []string
for _, pkg := range fixedInPackages {
previousVersions, err := pkg.PreviousVersions([]string{})
if err != nil {
return []*Layer{}, err
}
for _, version := range previousVersions {
vulnerablePackagesNodes = append(vulnerablePackagesNodes, version.Node)
}
}
// Return all the layers that add these packages
return FindAllLayersByAddedPackageNodes(vulnerablePackagesNodes, selectedFields)
}
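// A minimal usage sketch; the CVE identifier is illustrative only:
//
//	layers, err := FindAllLayersIntroducingVulnerability("CVE-2015-0001", []string{FieldLayerID})
//	if err == nil {
//		for _, layer := range layers {
//			_ = layer.ID // a layer that adds a package version still affected by the vulnerability
//		}
//	}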

387
database/vulnerability.go Normal file
View File

@ -0,0 +1,387 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"github.com/coreos/quay-sec/utils"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
"github.com/google/cayley"
"github.com/google/cayley/graph"
"github.com/google/cayley/graph/path"
)
const (
FieldVulnerabilityIsValue = "vulnerability"
FieldVulnerabilityID = "id"
FieldVulnerabilityLink = "link"
FieldVulnerabilityPriority = "priority"
FieldVulnerabilityDescription = "description"
FieldVulnerabilityFixedIn = "fixedIn"
)
var FieldVulnerabilityAll = []string{FieldVulnerabilityID, FieldVulnerabilityLink, FieldVulnerabilityPriority, FieldVulnerabilityDescription, FieldVulnerabilityFixedIn}
// Vulnerability represents a vulnerability that is fixed in some Packages
type Vulnerability struct {
Node string `json:"-"`
ID string
Link string
Priority types.Priority
Description string `json:",omitempty"`
FixedInNodes []string `json:"-"`
}
// GetNode returns a unique identifier for the graph node
// Requires the key field: ID
func (v *Vulnerability) GetNode() string {
return FieldVulnerabilityIsValue + ":" + utils.Hash(v.ID)
}
// ToAbstractVulnerability converts a Vulnerability into an
// AbstractVulnerability.
func (v *Vulnerability) ToAbstractVulnerability() (*AbstractVulnerability, error) {
// Find FixedIn packages.
fixedInPackages, err := FindAllPackagesByNodes(v.FixedInNodes, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion})
if err != nil {
return nil, err
}
return &AbstractVulnerability{
ID: v.ID,
Link: v.Link,
Priority: v.Priority,
Description: v.Description,
AffectedPackages: PackagesToAbstractPackages(fixedInPackages),
}, nil
}
// AbstractVulnerability represents a Vulnerability as it is defined in the database
// package, but directly exposes a list of AbstractPackages instead of
// nodes pointing to packages.
type AbstractVulnerability struct {
ID string
Link string
Priority types.Priority
Description string
AffectedPackages []*AbstractPackage
}
// ToVulnerability converts an AbstractVulnerability into
// a Vulnerability
func (av *AbstractVulnerability) ToVulnerability(fixedInNodes []string) *Vulnerability {
return &Vulnerability{
ID: av.ID,
Link: av.Link,
Priority: av.Priority,
Description: av.Description,
FixedInNodes: fixedInNodes,
}
}
// InsertVulnerabilities inserts or updates several vulnerabilities in the database in one transaction
// It ensures that a vulnerability can't be fixed by two packages belonging to the same Branch.
// During an update, if the vulnerability was previously fixed by a version in a branch and a new package of that branch is specified, the previous one is deleted
// Otherwise, it simply adds the defined packages; there is currently no way to delete affected packages.
//
// ID, Link, Priority and FixedInNodes fields have to be specified. Description is optional.
func InsertVulnerabilities(vulnerabilities []*Vulnerability) ([]Notification, error) {
if len(vulnerabilities) == 0 {
return []Notification{}, nil
}
// Create required data structure
var err error
t := cayley.NewTransaction()
cachedVulnerabilities := make(map[string]*Vulnerability)
newVulnerabilityNotifications := make(map[string]*NewVulnerabilityNotification)
vulnerabilityPriorityIncreasedNotifications := make(map[string]*VulnerabilityPriorityIncreasedNotification)
vulnerabilityPackageChangedNotifications := make(map[string]*VulnerabilityPackageChangedNotification)
// Iterate over all the vulnerabilities we need to insert/update
for _, vulnerability := range vulnerabilities {
// Does the vulnerability already exist?
existingVulnerability, _ := cachedVulnerabilities[vulnerability.ID]
if existingVulnerability == nil {
existingVulnerability, err = FindOneVulnerability(vulnerability.ID, FieldVulnerabilityAll)
if err != nil && err != cerrors.ErrNotFound {
return []Notification{}, err
}
if existingVulnerability != nil {
cachedVulnerabilities[vulnerability.ID] = existingVulnerability
}
}
// Don't allow inserting/updating a vulnerability which is fixed in two packages of the same branch
if len(vulnerability.FixedInNodes) > 0 {
fixedInPackages, err := FindAllPackagesByNodes(vulnerability.FixedInNodes, []string{FieldPackageOS, FieldPackageName})
if err != nil {
return []Notification{}, err
}
fixedInBranches := make(map[string]struct{})
for _, fixedInPackage := range fixedInPackages {
branch := fixedInPackage.Branch()
if _, branchExists := fixedInBranches[branch]; branchExists {
log.Warningf("could not insert vulnerability %s because it is fixed in two packages of the same branch", vulnerability.ID)
return []Notification{}, cerrors.NewBadRequestError("could not insert a vulnerability which is fixed in two packages of the same branch")
}
fixedInBranches[branch] = struct{}{}
}
}
// Insert/Update vulnerability
if existingVulnerability == nil {
// The vulnerability does not exist, create it
// Verify parameters
if vulnerability.ID == "" || vulnerability.Link == "" || vulnerability.Priority == "" {
log.Warningf("could not insert an incomplete vulnerability [ID: %s, Link: %s, Priority: %s]", vulnerability.ID, vulnerability.Link, vulnerability.Priority)
return []Notification{}, cerrors.NewBadRequestError("Could not insert an incomplete vulnerability")
}
if !vulnerability.Priority.IsValid() {
log.Warningf("could not insert a vulnerability which has an invalid priority [ID: %s, Link: %s, Priority: %s]. Valid priorities are: %v.", vulnerability.ID, vulnerability.Link, vulnerability.Priority, types.Priorities)
return []Notification{}, cerrors.NewBadRequestError("Could not insert a vulnerability which has an invalid priority")
}
if len(vulnerability.FixedInNodes) == 0 {
log.Warningf("could not insert a vulnerability which doesn't affect any package [ID: %s].", vulnerability.ID)
return []Notification{}, cerrors.NewBadRequestError("could not insert a vulnerability which doesn't affect any package")
}
// Insert it
vulnerability.Node = vulnerability.GetNode()
cachedVulnerabilities[vulnerability.ID] = vulnerability
t.AddQuad(cayley.Quad(vulnerability.Node, FieldIs, FieldVulnerabilityIsValue, ""))
t.AddQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityID, vulnerability.ID, ""))
t.AddQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityLink, vulnerability.Link, ""))
t.AddQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityPriority, string(vulnerability.Priority), ""))
t.AddQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityDescription, vulnerability.Description, ""))
for _, p := range vulnerability.FixedInNodes {
t.AddQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityFixedIn, p, ""))
}
// Add a notification
newVulnerabilityNotifications[vulnerability.ID] = &NewVulnerabilityNotification{VulnerabilityID: vulnerability.ID}
} else {
// The vulnerability already exists, update it
if vulnerability.Link != "" && existingVulnerability.Link != vulnerability.Link {
t.RemoveQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityLink, existingVulnerability.Link, ""))
t.AddQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityLink, vulnerability.Link, ""))
existingVulnerability.Link = vulnerability.Link
}
if vulnerability.Priority != "" && vulnerability.Priority != types.Unknown && existingVulnerability.Priority != vulnerability.Priority {
if !vulnerability.Priority.IsValid() {
log.Warningf("could not update a vulnerability which has an invalid priority [ID: %s, Link: %s, Priority: %s]. Valid priorities are: %v.", vulnerability.ID, vulnerability.Link, vulnerability.Priority, types.Priorities)
return []Notification{}, cerrors.NewBadRequestError("Could not update a vulnerability which has an invalid priority")
}
// Add a notification about the priority change if the new priority is higher and the vulnerability is not new
if vulnerability.Priority.Compare(existingVulnerability.Priority) > 0 {
if _, newVulnerabilityNotificationExists := newVulnerabilityNotifications[vulnerability.ID]; !newVulnerabilityNotificationExists {
// Is there already a priority change notification?
if existingPriorityNotification, _ := vulnerabilityPriorityIncreasedNotifications[vulnerability.ID]; existingPriorityNotification != nil {
// There is a priority change notification, replace it but keep the old priority field
vulnerabilityPriorityIncreasedNotifications[vulnerability.ID] = &VulnerabilityPriorityIncreasedNotification{OldPriority: existingPriorityNotification.OldPriority, NewPriority: vulnerability.Priority, VulnerabilityID: existingVulnerability.ID}
} else {
// No previous notification, just add a new one
vulnerabilityPriorityIncreasedNotifications[vulnerability.ID] = &VulnerabilityPriorityIncreasedNotification{OldPriority: existingVulnerability.Priority, NewPriority: vulnerability.Priority, VulnerabilityID: existingVulnerability.ID}
}
}
}
t.RemoveQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityPriority, string(existingVulnerability.Priority), ""))
t.AddQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityPriority, string(vulnerability.Priority), ""))
existingVulnerability.Priority = vulnerability.Priority
}
if vulnerability.Description != "" && existingVulnerability.Description != vulnerability.Description {
t.RemoveQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityDescription, existingVulnerability.Description, ""))
t.AddQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityDescription, vulnerability.Description, ""))
existingVulnerability.Description = vulnerability.Description
}
if len(vulnerability.FixedInNodes) > 0 && len(utils.CompareStringLists(vulnerability.FixedInNodes, existingVulnerability.FixedInNodes)) != 0 {
var removedNodes []string
var addedNodes []string
existingVulnerabilityFixedInPackages, err := FindAllPackagesByNodes(existingVulnerability.FixedInNodes, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion})
if err != nil {
return []Notification{}, err
}
vulnerabilityFixedInPackages, err := FindAllPackagesByNodes(vulnerability.FixedInNodes, []string{FieldPackageOS, FieldPackageName, FieldPackageVersion})
if err != nil {
return []Notification{}, err
}
for _, p := range vulnerabilityFixedInPackages {
// Does this link already exist?
fixedInLinkAlreadyExists := false
for _, ep := range existingVulnerabilityFixedInPackages {
if *p == *ep {
// This exact link already exists, we won't insert it again
fixedInLinkAlreadyExists = true
} else if p.Branch() == ep.Branch() {
// A link to this package branch already exists with a different version; we will delete it
t.RemoveQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityFixedIn, ep.Node, ""))
var index int
for i, n := range existingVulnerability.FixedInNodes {
if n == ep.Node {
index = i
break
}
}
existingVulnerability.FixedInNodes = append(existingVulnerability.FixedInNodes[:index], existingVulnerability.FixedInNodes[index+1:]...)
removedNodes = append(removedNodes, ep.Node)
}
}
if !fixedInLinkAlreadyExists {
t.AddQuad(cayley.Quad(existingVulnerability.Node, FieldVulnerabilityFixedIn, p.Node, ""))
existingVulnerability.FixedInNodes = append(existingVulnerability.FixedInNodes, p.Node)
addedNodes = append(addedNodes, p.Node)
}
}
// Add notification about the FixedIn modification if the vulnerability is not new
if len(removedNodes) > 0 || len(addedNodes) > 0 {
if _, newVulnerabilityNotificationExists := newVulnerabilityNotifications[vulnerability.ID]; !newVulnerabilityNotificationExists {
// Is there already a VulnerabilityPackageChangedNotification?
if existingPackageNotification, _ := vulnerabilityPackageChangedNotifications[vulnerability.ID]; existingPackageNotification != nil {
// There is already a package change notification, add the package modifications to it
existingPackageNotification.AddedFixedInNodes = append(existingPackageNotification.AddedFixedInNodes, addedNodes...)
existingPackageNotification.RemovedFixedInNodes = append(existingPackageNotification.RemovedFixedInNodes, removedNodes...)
} else {
// No previous notification, just add a new one
vulnerabilityPackageChangedNotifications[vulnerability.ID] = &VulnerabilityPackageChangedNotification{VulnerabilityID: vulnerability.ID, AddedFixedInNodes: addedNodes, RemovedFixedInNodes: removedNodes}
}
}
}
}
}
}
// Apply transaction
if err = store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (InsertVulnerabilities): %s", err)
return []Notification{}, ErrTransaction
}
// Group all notifications
var allNotifications []Notification
for _, notification := range newVulnerabilityNotifications {
allNotifications = append(allNotifications, notification)
}
for _, notification := range vulnerabilityPriorityIncreasedNotifications {
allNotifications = append(allNotifications, notification)
}
for _, notification := range vulnerabilityPackageChangedNotifications {
allNotifications = append(allNotifications, notification)
}
return allNotifications, nil
}
// DeleteVulnerability deletes the vulnerability having the given ID
func DeleteVulnerability(id string) error {
vulnerability, err := FindOneVulnerability(id, FieldVulnerabilityAll)
if err != nil {
return err
}
t := cayley.NewTransaction()
t.RemoveQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityID, vulnerability.ID, ""))
t.RemoveQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityLink, vulnerability.Link, ""))
t.RemoveQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityPriority, string(vulnerability.Priority), ""))
t.RemoveQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityDescription, vulnerability.Description, ""))
for _, p := range vulnerability.FixedInNodes {
t.RemoveQuad(cayley.Quad(vulnerability.Node, FieldVulnerabilityFixedIn, p, ""))
}
if err := store.ApplyTransaction(t); err != nil {
log.Errorf("failed transaction (DeleteVulnerability): %s", err)
return ErrTransaction
}
return nil
}
// FindOneVulnerability finds and returns a single vulnerability having the given ID selecting the specified fields
func FindOneVulnerability(id string, selectedFields []string) (*Vulnerability, error) {
t := &Vulnerability{ID: id}
v, err := toVulnerabilities(cayley.StartPath(store, t.GetNode()).Has(FieldIs, FieldVulnerabilityIsValue), selectedFields)
if err != nil {
return nil, err
}
if len(v) == 1 {
return v[0], nil
}
if len(v) > 1 {
log.Errorf("found multiple vulnerabilities with identical ID [ID: %s]", id)
return nil, ErrInconsistent
}
return nil, cerrors.ErrNotFound
}
// FindAllVulnerabilitiesByFixedIn finds and returns all vulnerabilities that are fixed in the given packages (specified by their nodes), selecting the specified fields
func FindAllVulnerabilitiesByFixedIn(nodes []string, selectedFields []string) ([]*Vulnerability, error) {
if len(nodes) == 0 {
log.Warning("Could not FindAllVulnerabilitiesByFixedIn with an empty nodes array.")
return []*Vulnerability{}, nil
}
return toVulnerabilities(cayley.StartPath(store, nodes...).In(FieldVulnerabilityFixedIn), selectedFields)
}
// toVulnerabilities converts a path leading to one or multiple vulnerabilities to Vulnerability structs, selecting the specified fields
func toVulnerabilities(path *path.Path, selectedFields []string) ([]*Vulnerability, error) {
var vulnerabilities []*Vulnerability
saveFields(path, selectedFields, []string{FieldVulnerabilityFixedIn})
it, _ := path.BuildIterator().Optimize()
defer it.Close()
for cayley.RawNext(it) {
tags := make(map[string]graph.Value)
it.TagResults(tags)
vulnerability := Vulnerability{Node: store.NameOf(it.Result())}
for _, selectedField := range selectedFields {
switch selectedField {
case FieldVulnerabilityID:
vulnerability.ID = store.NameOf(tags[FieldVulnerabilityID])
case FieldVulnerabilityLink:
vulnerability.Link = store.NameOf(tags[FieldVulnerabilityLink])
case FieldVulnerabilityPriority:
vulnerability.Priority = types.Priority(store.NameOf(tags[FieldVulnerabilityPriority]))
case FieldVulnerabilityDescription:
vulnerability.Description = store.NameOf(tags[FieldVulnerabilityDescription])
case FieldVulnerabilityFixedIn:
var err error
vulnerability.FixedInNodes, err = toValues(cayley.StartPath(store, vulnerability.Node).Out(FieldVulnerabilityFixedIn))
if err != nil {
log.Errorf("could not get fixedIn on vulnerability %s: %s.", vulnerability.Node, err.Error())
return []*Vulnerability{}, err
}
default:
panic("unknown selectedField")
}
}
vulnerabilities = append(vulnerabilities, &vulnerability)
}
if it.Err() != nil {
log.Errorf("failed query in toVulnerabilities: %s", it.Err())
return []*Vulnerability{}, ErrBackendException
}
return vulnerabilities, nil
}


@ -0,0 +1,243 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package database
import (
"testing"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
"github.com/stretchr/testify/assert"
)
func TestVulnerability(t *testing.T) {
Open("memstore", "")
defer Close()
// Insert invalid vulnerabilities
for _, vulnerability := range []Vulnerability{
Vulnerability{ID: "", Link: "link1", Priority: types.Medium, FixedInNodes: []string{"pkg1"}},
Vulnerability{ID: "test1", Link: "", Priority: types.Medium, FixedInNodes: []string{"pkg1"}},
Vulnerability{ID: "test1", Link: "link1", Priority: "InvalidPriority", FixedInNodes: []string{"pkg1"}},
Vulnerability{ID: "test1", Link: "link1", Priority: types.Medium, FixedInNodes: []string{}},
} {
_, err := InsertVulnerabilities([]*Vulnerability{&vulnerability})
assert.Error(t, err)
}
// Some data
vuln1 := &Vulnerability{ID: "test1", Link: "link1", Priority: types.Medium, Description: "testDescription1", FixedInNodes: []string{"pkg1"}}
vuln2 := &Vulnerability{ID: "test2", Link: "link2", Priority: types.High, Description: "testDescription2", FixedInNodes: []string{"pkg1", "pkg2"}}
vuln3 := &Vulnerability{ID: "test3", Link: "link3", Priority: types.High, FixedInNodes: []string{"pkg3"}} // Empty description
// Insert some vulnerabilities
_, err := InsertVulnerabilities([]*Vulnerability{vuln1, vuln2, vuln3})
if assert.Nil(t, err) {
// Find one of the vulnerabilities we just inserted and verify its content
v1, err := FindOneVulnerability(vuln1.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v1) {
assert.Equal(t, vuln1.ID, v1.ID)
assert.Equal(t, vuln1.Link, v1.Link)
assert.Equal(t, vuln1.Priority, v1.Priority)
assert.Equal(t, vuln1.Description, v1.Description)
if assert.Len(t, v1.FixedInNodes, 1) {
assert.Equal(t, vuln1.FixedInNodes[0], v1.FixedInNodes[0])
}
}
// Ensure that vulnerabilities with empty descriptions work as well
v3, err := FindOneVulnerability(vuln3.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v3) {
assert.Equal(t, vuln3.Description, v3.Description)
}
// Find vulnerabilities by fixed packages
vulnsFixedInPkg2AndPkg3, err := FindAllVulnerabilitiesByFixedIn([]string{"pkg2", "pkg3"}, FieldVulnerabilityAll)
assert.Nil(t, err)
assert.Len(t, vulnsFixedInPkg2AndPkg3, 2)
// Delete vulnerability
if assert.Nil(t, DeleteVulnerability(vuln1.ID)) {
v1, err := FindOneVulnerability(vuln1.ID, FieldVulnerabilityAll)
assert.Equal(t, cerrors.ErrNotFound, err)
assert.Nil(t, v1)
}
}
// Update a vulnerability and verify its new content
pkg1 := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")}
InsertPackages([]*Package{pkg1})
vuln5 := &Vulnerability{ID: "test5", Link: "link5", Priority: types.Medium, Description: "testDescription5", FixedInNodes: []string{pkg1.Node}}
_, err = InsertVulnerabilities([]*Vulnerability{vuln5})
if assert.Nil(t, err) {
// Partial updates
// # Just a field update
vuln5b := &Vulnerability{ID: "test5", Priority: types.High}
_, err := InsertVulnerabilities([]*Vulnerability{vuln5b})
if assert.Nil(t, err) {
v5b, err := FindOneVulnerability(vuln5b.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v5b) {
assert.Equal(t, vuln5b.ID, v5b.ID)
assert.Equal(t, vuln5b.Priority, v5b.Priority)
if assert.Len(t, v5b.FixedInNodes, 1) {
assert.Contains(t, v5b.FixedInNodes, pkg1.Node)
}
}
}
// # Just a field update, twice in the same transaction
vuln5b1 := &Vulnerability{ID: "test5", Link: "http://foo.bar"}
vuln5b2 := &Vulnerability{ID: "test5", Link: "http://bar.foo"}
_, err = InsertVulnerabilities([]*Vulnerability{vuln5b1, vuln5b2})
if assert.Nil(t, err) {
v5b2, err := FindOneVulnerability(vuln5b2.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v5b2) {
assert.Equal(t, vuln5b2.Link, v5b2.Link)
}
}
// # All fields except fixedIn update
vuln5c := &Vulnerability{ID: "test5", Link: "link5c", Priority: types.Critical, Description: "testDescription5c"}
_, err = InsertVulnerabilities([]*Vulnerability{vuln5c})
if assert.Nil(t, err) {
v5c, err := FindOneVulnerability(vuln5c.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v5c) {
assert.Equal(t, vuln5c.ID, v5c.ID)
assert.Equal(t, vuln5c.Link, v5c.Link)
assert.Equal(t, vuln5c.Priority, v5c.Priority)
assert.Equal(t, vuln5c.Description, v5c.Description)
if assert.Len(t, v5c.FixedInNodes, 1) {
assert.Contains(t, v5c.FixedInNodes, pkg1.Node)
}
}
}
// Complete update
pkg2 := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.1")}
pkg3 := &Package{OS: "testOS", Name: "testpkg2", Version: types.NewVersionUnsafe("1.0")}
InsertPackages([]*Package{pkg2, pkg3})
vuln5d := &Vulnerability{ID: "test5", Link: "link5d", Priority: types.Low, Description: "testDescription5d", FixedInNodes: []string{pkg2.Node, pkg3.Node}}
_, err = InsertVulnerabilities([]*Vulnerability{vuln5d})
if assert.Nil(t, err) {
v5d, err := FindOneVulnerability(vuln5d.ID, FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.NotNil(t, v5d) {
assert.Equal(t, vuln5d.ID, v5d.ID)
assert.Equal(t, vuln5d.Link, v5d.Link)
assert.Equal(t, vuln5d.Priority, v5d.Priority)
assert.Equal(t, vuln5d.Description, v5d.Description)
// Here, we ensure that a vulnerability can only be fixed by one package of a given branch at a given time
// And that we can add new fixed packages as well
if assert.Len(t, v5d.FixedInNodes, 2) {
assert.NotContains(t, v5d.FixedInNodes, pkg1.Node)
}
}
}
}
// Create and update a vulnerability's packages (and from the same branch) in the same batch
pkg1 = &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")}
pkg1b := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.1")}
InsertPackages([]*Package{pkg1, pkg1b})
// # A vulnerability can't be inserted if fixed by two packages of the same branch
_, err = InsertVulnerabilities([]*Vulnerability{&Vulnerability{ID: "test6", Link: "link6", Priority: types.Medium, Description: "testDescription6", FixedInNodes: []string{pkg1.Node, pkg1b.Node}}})
assert.Error(t, err)
// # Two updates of the same vulnerability in the same batch with packages of the same branch
pkg0 := &Package{OS: "testOS", Name: "testpkg0", Version: types.NewVersionUnsafe("1.0")}
InsertPackages([]*Package{pkg0})
_, err = InsertVulnerabilities([]*Vulnerability{&Vulnerability{ID: "test7", Link: "link7", Priority: types.Medium, Description: "testDescription7", FixedInNodes: []string{pkg0.Node}}})
if assert.Nil(t, err) {
vuln7b := &Vulnerability{ID: "test7", FixedInNodes: []string{pkg1.Node}}
vuln7c := &Vulnerability{ID: "test7", FixedInNodes: []string{pkg1b.Node}}
_, err = InsertVulnerabilities([]*Vulnerability{vuln7b, vuln7c})
if assert.Nil(t, err) {
v7, err := FindOneVulnerability("test7", FieldVulnerabilityAll)
if assert.Nil(t, err) && assert.Len(t, v7.FixedInNodes, 2) {
assert.Contains(t, v7.FixedInNodes, pkg0.Node)
assert.NotContains(t, v7.FixedInNodes, pkg1.Node)
assert.Contains(t, v7.FixedInNodes, pkg1b.Node)
}
// # A vulnerability can't be updated if fixed by two packages of the same branch
_, err = InsertVulnerabilities([]*Vulnerability{&Vulnerability{ID: "test7", FixedInNodes: []string{pkg1.Node, pkg1b.Node}}})
assert.Error(t, err)
}
}
}
func TestInsertVulnerabilityNotifications(t *testing.T) {
Open("memstore", "")
defer Close()
pkg1 := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.0")}
pkg1b := &Package{OS: "testOS", Name: "testpkg1", Version: types.NewVersionUnsafe("1.2")}
pkg2 := &Package{OS: "testOS", Name: "testpkg2", Version: types.NewVersionUnsafe("1.0")}
InsertPackages([]*Package{pkg1, pkg1b, pkg2})
// NewVulnerabilityNotification
vuln1 := &Vulnerability{ID: "test1", Link: "link1", Priority: types.Medium, Description: "testDescription1", FixedInNodes: []string{pkg1.Node}}
vuln2 := &Vulnerability{ID: "test2", Link: "link2", Priority: types.High, Description: "testDescription2", FixedInNodes: []string{pkg1.Node, pkg2.Node}}
vuln1b := &Vulnerability{ID: "test1", Priority: types.High, FixedInNodes: []string{"pkg3"}}
notifications, err := InsertVulnerabilities([]*Vulnerability{vuln1, vuln2, vuln1b})
if assert.Nil(t, err) {
// We should only have two NewVulnerabilityNotification notifications: one for test1 and one for test2
// We should not have a VulnerabilityPriorityIncreasedNotification or a VulnerabilityPackageChangedNotification
// for test1 because it is in the same batch
if assert.Len(t, notifications, 2) {
for _, n := range notifications {
_, ok := n.(*NewVulnerabilityNotification)
assert.True(t, ok)
}
}
}
// VulnerabilityPriorityIncreasedNotification
vuln1c := &Vulnerability{ID: "test1", Priority: types.Critical}
notifications, err = InsertVulnerabilities([]*Vulnerability{vuln1c})
if assert.Nil(t, err) {
if assert.Len(t, notifications, 1) {
if nn, ok := notifications[0].(*VulnerabilityPriorityIncreasedNotification); assert.True(t, ok) {
assert.Equal(t, vuln1b.Priority, nn.OldPriority)
assert.Equal(t, vuln1c.Priority, nn.NewPriority)
}
}
}
notifications, err = InsertVulnerabilities([]*Vulnerability{&Vulnerability{ID: "test1", Priority: types.Low}})
assert.Nil(t, err)
assert.Len(t, notifications, 0)
// VulnerabilityPackageChangedNotification
vuln1e := &Vulnerability{ID: "test1", FixedInNodes: []string{pkg1b.Node}}
vuln1f := &Vulnerability{ID: "test1", FixedInNodes: []string{pkg2.Node}}
notifications, err = InsertVulnerabilities([]*Vulnerability{vuln1e, vuln1f})
if assert.Nil(t, err) {
if assert.Len(t, notifications, 1) {
if nn, ok := notifications[0].(*VulnerabilityPackageChangedNotification); assert.True(t, ok) {
// Here, we say that pkg1b fixes the vulnerability, but as pkg1b is in
// the same branch as pkg1, pkg1 should be removed and pkg1b added
// We also add pkg2 as fixed
assert.Contains(t, nn.AddedFixedInNodes, pkg1b.Node)
assert.Contains(t, nn.RemovedFixedInNodes, pkg1.Node)
assert.Contains(t, nn.AddedFixedInNodes, pkg2.Node)
}
}
}
}

760
docs/API.md Normal file

@ -0,0 +1,760 @@
# General
## Fetch API Version
It returns the versions of the API and the layer processing engine.
GET /v1/versions
* The versions are integers.
* The API version number is raised each time there is a structural change.
* The Engine version is increased when a new layer analysis may find new relevant data.
### Example
```
curl -s 127.0.0.1:6060/v1/versions | python -m json.tool
```
### Response
```
HTTP/1.1 200 OK
{
"APIVersion": "1",
"EngineVersion": "1"
}
```
## Fetch Health status
GET /v1/health
Returns 200 if essential services are healthy (i.e. the database) and 503 otherwise.
This call is also available on the API port + 1, without any security, allowing
external monitoring systems to easily access it.
### Example
```
curl -s 127.0.0.1:6060/v1/health | python -m json.tool
```
```
curl -s 127.0.0.1:6061/ | python -m json.tool
```
### Success Response
```
HTTP/1.1 200 OK
{
"database":{
"IsHealthy":true
},
"notifier":{
"IsHealthy":true,
"Details":{
"QueueSize":0
}
},
"updater":{
"IsHealthy":true,
"Details":{
"HealthIdentifier":"cf65a8f6-425c-4a9c-87fe-f59ddf75fc87",
"HealthLockOwner":"1e7fce65-ee67-4ca5-b2e9-61e9f5e0d3ed",
"LatestSuccessfulUpdate":"2015-09-30T14:47:47Z",
"ConsecutiveLocalFailures":0
}
}
}
```
### Error Response
```
HTTP/1.1 503 Service unavailable
{
"database":{
"IsHealthy":false
},
"notifier":{
"IsHealthy":true,
"Details":{
"QueueSize":0
}
},
"updater":{
"IsHealthy":true,
"Details":{
"HealthIdentifier":"cf65a8f6-425c-4a9c-87fe-f59ddf75fc87",
"HealthLockOwner":"1e7fce65-ee67-4ca5-b2e9-61e9f5e0d3ed",
"LatestSuccessfulUpdate":"2015-09-30T14:47:47Z",
"ConsecutiveLocalFailures":0
}
}
}
```
# Layers
## Insert a new Layer
It processes and inserts a new Layer in the database.
POST /v1/layers
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
|Path|String|Absolute path or HTTP link pointing to the Layer's tar file|
|ParentID|String|(Optional) Unique ID of the Layer's parent|
If the Layer has no parent, the ParentID field should be omitted or empty.
### Example
```
curl -s -H "Content-Type: application/json" -X POST -d \
'{
"ID": "39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8",
"Path": "https://layers_storage/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8.tar",
"ParentID": "df2a0347c9d081fa05ecb83669dcae5830c67b0676a6d6358218e55d8a45969c"
}' \
127.0.0.1:6060/v1/layers
```
### Success Response
If the layer has been successfully processed, the version of the engine which processed it is returned.
```
HTTP/1.1 201 Created
{
"Version": "1"
}
```
### Error Response
```
HTTP/1.1 400 Bad Request
{
"Message": "Layer 39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8's parent (df2a0347c9d081fa05ecb83669dcae5830c67b0676a6d6358218e55d8a45969c) is unknown."
}
```
It could also return a `415 Unsupported Media Type` response with a `Message` if the request content is not valid JSON.
## Get a Layer's operating system
It returns the operating system of a given Layer.
GET /v1/layers/{ID}/os
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
### Example
curl -s 127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/os | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"OS": "debian:8",
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get a Layer's parent
It returns the parent's ID of a given Layer.
It returns an empty ID string when the layer has no parent.
GET /v1/layers/{ID}/parent
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
### Example
curl -s 127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/parent | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"ID": "df2a0347c9d081fa05ecb83669dcae5830c67b0676a6d6358218e55d8a45969c",
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get a Layer's package list
It returns the package list of a given Layer.
GET /v1/layers/{ID}/packages
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
### Example
curl -s 127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/packages | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"Packages": [
{
"Name": "gcc-4.9",
"OS": "debian:8",
"Version": "4.9.2-10"
},
[...]
]
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get a Layer's package diff
It returns the lists of packages a given Layer installs and removes.
GET /v1/layers/{ID}/packages/diff
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
### Example
curl -s 127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/packages/diff | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"InstalledPackages": [
{
"Name": "gcc-4.9",
"OS": "debian:8",
"Version": "4.9.2-10"
},
[...]
],
"RemovedPackages": null
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get a Layer's vulnerabilities
It returns the lists of vulnerabilities which affect a given Layer.
GET /v1/layers/{ID}/vulnerabilities(?minimumPriority=Low)
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
|minimumPriority|Priority|(Optional) The minimum priority of the returned vulnerabilities. Defaults to High|
### Example
curl -s "127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/vulnerabilities?minimumPriority=Negligible" | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"Vulnerabilities": [
{
"ID": "CVE-2014-2583",
"Link": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-2583",
"Priority": "Low",
"Description": "Multiple directory traversal vulnerabilities in pam_timestamp.c in the pam_timestamp module for Linux-PAM (aka pam) 1.1.8 allow local users to create aribitrary files or possibly bypass authentication via a .. (dot dot) in the (1) PAM_RUSER value to the get_ruser function or (2) PAM_TTY value to the check_tty funtion, which is used by the format_timestamp_name function."
},
[...]
]
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get vulnerabilities that a layer introduces and removes
It returns the lists of vulnerabilities which are introduced and removed by the given Layer.
GET /v1/layers/{ID}/vulnerabilities/diff(?minimumPriority=Low)
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Layer|
|minimumPriority|Priority|(Optional) The minimum priority of the returned vulnerabilities|
### Example
curl -s "127.0.0.1:6060/v1/layers/39bb80489af75406073b5364c9c326134015140e1f7976a370a8bd446889e6f8/vulnerabilities?minimumPriority=Negligible" | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"Adds": [
{
"ID": "CVE-2014-2583",
"Link": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-2583",
"Priority": "Low",
"Description": "Multiple directory traversal vulnerabilities in pam_timestamp.c in the pam_timestamp module for Linux-PAM (aka pam) 1.1.8 allow local users to create aribitrary files or possibly bypass authentication via a .. (dot dot) in the (1) PAM_RUSER value to the get_ruser function or (2) PAM_TTY value to the check_tty funtion, which is used by the format_timestamp_name function."
},
[...]
],
"Removes": null
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
## Get Layers' vulnerabilities (Batch)
It returns the lists of vulnerabilities which affect the given Layers.
POST /v1/batch/layers/vulnerabilities(?minimumPriority=Low)
Counterintuitively, this request is a POST so that a large number of parameters can be passed.
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|LayersIDs|Array of strings|Unique IDs of Layers|
|minimumPriority|Priority|(Optional) The minimum priority of the returned vulnerabilities. Defaults to High|
### Example
```
curl -s -H "Content-Type: application/json" -X POST -d \
'{
"LayersIDs": [
"a005304e4e74c1541988d3d1abb170e338c1d45daee7151f8e82f8460634d329",
"f1b10cd842498c23d206ee0cbeaa9de8d2ae09ff3c7af2723a9e337a6965d639"
]
}' \
127.0.0.1:6060/v1/batch/layers/vulnerabilities
```
### Success Response
```
HTTP/1.1 200 OK
{
"a005304e4e74c1541988d3d1abb170e338c1d45daee7151f8e82f8460634d329": {
"Vulnerabilities": [
{
"ID": "CVE-2014-2583",
"Link": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-2583",
"Priority": "Low",
"Description": "Multiple directory traversal vulnerabilities in pam_timestamp.c in the pam_timestamp module for Linux-PAM (aka pam) 1.1.8 allow local users to create aribitrary files or possibly bypass authentication via a .. (dot dot) in the (1) PAM_RUSER value to the get_ruser function or (2) PAM_TTY value to the check_tty funtion, which is used by the format_timestamp_name function."
},
[...]
]
},
[...]
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```
# Vulnerabilities
## Get a vulnerability's information
It returns all known information about a Vulnerability and its fixes.
GET /v1/vulnerabilities/{ID}
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Vulnerability|
### Example
curl -s 127.0.0.1:6060/v1/vulnerabilities/CVE-2015-0235 | python -m json.tool
### Success Response
```
HTTP/1.1 200 OK
{
"ID": "CVE-2015-0235",
"Link": "https://security-tracker.debian.org/tracker/CVE-2015-0235",
"Priority": "High",
"Description": "Heap-based buffer overflow in the __nss_hostname_digits_dots function in glibc 2.2, and other 2.x versions before 2.18, allows context-dependent attackers to execute arbitrary code via vectors related to the (1) gethostbyname or (2) gethostbyname2 function, aka \"GHOST.\"",
"AffectedPackages": [
{
"Name": "eglibc",
"OS": "debian:7",
"AllVersions": false,
"BeforeVersion": "2.13-38+deb7u7"
},
{
"Name": "glibc",
"OS": "debian:8",
"AllVersions": false,
"BeforeVersion": "2.18-1"
},
{
"Name": "glibc",
"OS": "debian:9",
"AllVersions": false,
"BeforeVersion": "2.18-1"
},
{
"Name": "glibc",
"OS": "debian:unstable",
"AllVersions": false,
"BeforeVersion": "2.18-1"
},
{
"Name": "eglibc",
"OS": "debian:6",
"AllVersions": true,
"BeforeVersion": "",
}
],
}
```
The `AffectedPackages` array represents the list of affected packages and provides, for each package, the first known version in which the Vulnerability has been fixed; all previous versions may be vulnerable. If `AllVersions` is equal to `true`, no fix exists, thus all versions may be vulnerable.
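To make these semantics concrete, here is a minimal Go sketch of how an API consumer might evaluate an `AffectedPackages` entry; the `AffectedPackage` struct and the `versionLess` helper are illustrative assumptions, not part of this API, and a real client would need a proper Debian version comparison:
```
package main

import "fmt"

// AffectedPackage mirrors the JSON entries shown above.
type AffectedPackage struct {
	Name          string
	OS            string
	AllVersions   bool
	BeforeVersion string
}

// isAffected reports whether an installed version is affected, following the
// semantics described above: AllVersions means that no fix exists; otherwise,
// every version strictly before BeforeVersion is vulnerable.
func isAffected(installedVersion string, ap AffectedPackage) bool {
	if ap.AllVersions {
		return true
	}
	return versionLess(installedVersion, ap.BeforeVersion)
}

// versionLess is a stand-in for a real package-version comparison; a proper
// client would use Debian version ordering, not plain string ordering.
func versionLess(a, b string) bool {
	return a < b
}

func main() {
	eglibc := AffectedPackage{Name: "eglibc", OS: "debian:7", BeforeVersion: "2.13-38+deb7u7"}
	fmt.Println(isAffected("2.13-38+deb7u6", eglibc)) // true: older than the first fixed version
	fmt.Println(isAffected("2.13-38+deb7u7", eglibc)) // false: this is the first fixed version
}
```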
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message":"the resource cannot be found"
}
```
## Insert a new Vulnerability
It manually inserts a new Vulnerability.
POST /v1/vulnerabilities
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Vulnerability|
|Link|String|Link to the Vulnerability tracker|
|Priority|Priority|Priority of the Vulnerability|
|AffectedPackages|Array of Package|Affected packages (Name, OS) and fixed version (or all versions)|
If no fix exists for a package, `AllVersions` should be set to `true`.
Valid Priorities are based on [Ubuntu CVE Tracker/README](http://bazaar.launchpad.net/~ubuntu-security/ubuntu-cve-tracker/master/view/head:/README)
* **Unknown** is either a security problem that has not been assigned to a priority yet or a priority that our system did not recognize.
* **Negligible** is technically a security problem, but is only theoretical in nature, requires a very special situation, has almost no install base, or does no real damage. These tend not to get backported from upstream, and will likely not be included in security updates unless there is an easy fix and some other issue causes an update.
* **Low** is a security problem, but is hard to exploit due to environment, requires a user-assisted attack, a small install base, or does very little damage. These tend to be included in security updates only when higher priority issues require an update, or if many low priority issues have built up.
* **Medium** is a real security problem, and is exploitable for many people. Includes network daemon denial of service attacks, cross-site scripting, and gaining user privileges. Updates should be made soon for this priority of issue.
* **High** is a real problem, exploitable for many people in a default installation. Includes serious remote denial of services, local root privilege escalations, or data loss.
* **Critical** is a world-burning problem, exploitable for nearly all people in a default installation of Ubuntu. Includes remote root privilege escalations, or massive data loss.
* **Defcon1** is a **Critical** problem which has been manually highlighted by the team. It requires immediate attention.
### Example
```
curl -s -H "Content-Type: application/json" -X POST -d \
'{
"ID": "CVE-2015-0235",
"Link": "https:security-tracker.debian.org/tracker/CVE-2015-0235",
"Priority": "High",
"Description": "Heap-based buffer overflow in the __nss_hostname_digits_dots function in glibc 2.2, and other 2.x versions before 2.18, allows context-dependent attackers to execute arbitrary code via vectors related to the (1) gethostbyname or (2) gethostbyname2 function, aka \"GHOST.\"",
"AffectedPackages": [
{
"Name": "eglibc",
"OS": "debian:7",
"BeforeVersion": "2.13-38+deb7u7"
},
{
"Name": "glibc",
"OS": "debian:8",
"BeforeVersion": "2.18-1"
},
{
"Name": "glibc",
"OS": "debian:9",
"BeforeVersion": "2.18-1"
},
{
"Name": "glibc",
"OS": "debian:unstable",
"BeforeVersion": "2.18-1"
},
{
"Name": "eglibc",
"OS": "debian:6",
"AllVersions": true,
"BeforeVersion": ""
}
]
}' \
127.0.0.1:6060/v1/vulnerabilities
```
### Success Response
HTTP/1.1 201 Created
### Error Response
```
HTTP/1.1 400 Bad Request
{
"Message":"Could not insert a vulnerability which has an invalid priority"
}
```
It could also return a `415 Unsupported Media Type` response with a `Message` if the request content is not valid JSON.
## Update a Vulnerability
It updates an existing Vulnerability.
PUT /v1/vulnerabilities/{ID}
The Link, Priority and Description fields can be updated. FixedIn packages are added to the vulnerability. However, as a vulnerability can only be fixed by one package of a given branch (OS, Name), any old FixedIn package that belongs to the same branch as a newly added one will be removed.
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|Link|String|Link to the Vulnerability tracker|
|Priority|Priority|Priority of the Vulnerability|
|FixedIn|Array of Package|Affected packages (Name, OS) and fixed version (or all versions)|
If no fix exists for a package, `AllVersions` should be set to `true`.
### Example
curl -s -H "Content-Type: application/json" -X PUT -d '{"Priority": "Critical" }' 127.0.0.1:6060/v1/vulnerabilities/CVE-2015-0235
### Success Response
```
HTTP/1.1 204 No content
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message":"the resource cannot be found"
}
```
It could also return a `415 Unsupported Media Type` response with a `Message` if the request content is not valid JSON.
## Delete a Vulnerability
It deletes an existing Vulnerability.
DEL /v1/vulnerabilities/{ID}
Be aware that it does not prevent fetchers from re-creating it. Therefore, it is only useful for removing manually inserted vulnerabilities.
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Vulnerability|
### Example
curl -s -X DEL 127.0.0.1:6060/v1/vulnerabilities/CVE-2015-0235
### Success Response
```
HTTP/1.1 204 No content
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message":"the resource cannot be found"
}
```
## Get layers introducing a vulnerability
It gets all the layers (their IDs) that introduce the given vulnerability.
GET /v1/vulnerabilities/{ID}/introducing-layers
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Vulnerability|
### Example
curl -s -X GET 127.0.0.1:6060/v1/vulnerabilities/CVE-2015-0235/introducing-layers
### Success Response
```
HTTP/1.1 200
{
"IntroducingLayers":[
"fb9cc58bde0c0a8fe53e6fdd23898e45041783f2d7869d939d7364f5777fde6f"
]
}
```
### Error Response
```
HTTP/1.1 404 Not Found
{
"Message":"the resource cannot be found"
}
```
## Get layers affected by a vulnerability
It returns whether the specified Layers are vulnerable to the given Vulnerability or not.
POST /v1/vulnerabilities/{ID}/affected-layers
Counterintuitively, this request is a POST so that a large number of parameters can be passed.
### Parameters
|Name|Type|Description|
|------|-----|-------------|
|ID|String|Unique ID of the Vulnerability|
|LayersIDs|Array of strings|Unique IDs of Layers|
### Example
```
curl -s -H "Content-Type: application/json" -X POST -d \
'{
"LayersIDs": [
"a005304e4e74c1541988d3d1abb170e338c1d45daee7151f8e82f8460634d329",
"f1b10cd842498c23d206ee0cbeaa9de8d2ae09ff3c7af2723a9e337a6965d639"
]
}' \
127.0.0.1:6060/v1/vulnerabilities/CVE-2015-0235/affected-layers
```
### Success Response
```
HTTP/1.1 200 OK
{
"f1b10cd842498c23d206ee0cbeaa9de8d2ae09ff3c7af2723a9e337a6965d639": {
"Vulnerable": false
},
"fb9cc58bde0c0a8fe53e6fdd23898e45041783f2d7869d939d7364f5777fde6f": {
"Vulnerable": true
}
}
```
### Error Response
Returned when the Layer or the Vulnerability does not exist.
```
HTTP/1.1 404 Not Found
{
"Message": "the resource cannot be found"
}
```

BIN
docs/Model.graffle Normal file

Binary file not shown.

70
docs/Model.md Normal file

@ -0,0 +1,70 @@
# Legend
-> outbound edges
<- inbound edges
# Layer
Key: "layer:" + Hash(id)
-> is = "layer"
-> id
-> parent (my ancestor is)
-> os
-> adds*
-> removes*
-> engineVersion
<- parent* (is ancestor of)
# Package
Key: "package:" + Hash(os + ":" + name + ":" + version)
-> is = "package"
-> os
-> name
-> version
-> nextVersion
<- nextVersion
<- adds*
<- removes*
<- fixed_in*
Packages are organized in linked lists: there is one linked list per os/name couple. Each linked list has a tail and a head with special versions.
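To make the key scheme concrete, here is a small illustrative Go sketch; the SHA-1 choice and the sample package are assumptions made for the example, only the `"package:" + Hash(os + ":" + name + ":" + version)` shape comes from the model above:
```
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// packageKey follows the key scheme above; SHA-1 is only an assumption made
// for this example, the model does not mandate a specific hash function.
func packageKey(os, name, version string) string {
	sum := sha1.Sum([]byte(os + ":" + name + ":" + version))
	return "package:" + hex.EncodeToString(sum[:])
}

func main() {
	// One linked list per os/name couple, e.g. debian:8/openssl:
	//   head -> 1.0.1k-1 -> 1.0.1k-3 -> tail   (following nextVersion edges)
	fmt.Println(packageKey("debian:8", "openssl", "1.0.1k-3"))
}
```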
# Vulnerability
Key: "vulnerability:" + Hash(name)
-> is = "vulnerability"
-> name
-> priority
-> link
-> fixed_in*
# Notification
Key: "notification:" + random uuid
-> is = "notification"
-> type
-> data
-> isSent
# Flag
Key: "flag:" + name
-> value
# Lock
Key: name
-> locked = "locked"
-> locked_until (timestamp)
-> locked_by
A lock can be used to lock a specific graph node by using the node Key as the lock name.
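For illustration, the notifier shipped in this commit (notifier/notifier.go) uses such a lock roughly like the sketch below; the `database.Lock`/`database.Unlock` signatures are assumed from that usage and the duration is arbitrary:
```
package example

import (
	"time"

	"github.com/coreos/quay-sec/database"
	"github.com/pborman/uuid"
)

// lockNode sketches the pattern used by notifier/notifier.go in this commit:
// the graph node Key doubles as the lock name.
func lockNode(node string) {
	owner := uuid.New()
	hasLock, hasLockUntil := database.Lock(node, 10*time.Minute, owner)
	if !hasLock {
		return // another instance currently owns the lock
	}
	defer database.Unlock(node, owner)
	_ = hasLockUntil // long-running work should renew the lock before this deadline
}
```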

BIN
docs/Model.png Normal file

Binary file not shown.


131
docs/Notifications.md Normal file

@ -0,0 +1,131 @@
# Notifications
This tool can send notifications to external services when specific events happen, such as vulnerability updates.
For now, it only supports transmitting them to an HTTP endpoint using POST requests, but it may be extended quite easily.
To enable the notification system, specify the following command-line arguments:
--notifier-type=http --notifier-http-url="http://your-notification-endpoint"
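As a rough idea of what the extension mentioned above involves, the notifier package defines a `Notifier` interface with a single `Run(*utils.Stopper)` method; the hypothetical implementation below only illustrates that contract (wiring it into main.go's `--notifier-type` switch is not shown):
```
package notifier

import (
	"fmt"
	"time"

	"github.com/coreos/quay-sec/utils"
)

// logNotifier is a hypothetical Notifier that only wakes up periodically; it
// illustrates the Run(*utils.Stopper) contract, nothing more.
type logNotifier struct{}

func (n *logNotifier) Run(st *utils.Stopper) {
	defer st.End()
	for {
		fmt.Println("logNotifier: nothing to send")
		if !st.Sleep(10 * time.Second) {
			break // the stopper asked us to shut down
		}
	}
}
```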
# Types of notifications
## A new vulnerability has been released
A notification of this kind is sent as soon as a new vulnerability is added in the system, via the updater or the API.
### Example
```
{
"Name":"CVE-2016-0001",
"Type":"NewVulnerabilityNotification",
"Content":{
"Vulnerability":{
"ID":"CVE-2016-0001",
"Link":"https:security-tracker.debian.org/tracker/CVE-2016-0001",
"Priority":"Medium",
"Description":"A futurist vulnerability",
"AffectedPackages":[
{
"OS":"centos:6",
"Name":"bash",
"AllVersions":true,
"BeforeVersion":""
}
]
},
"IntroducingLayersIDs":[
"fb9cc58bde0c0a8fe53e6fdd23898e45041783f2d7869d939d7364f5777fde6f"
]
}
}
```
The `IntroducingLayersIDs` array contains every layer that installs at least one affected package.
## A vulnerability's priority has increased
This notification is sent when a vulnerability's priority has increased.
### Example
```
{
"Name":"CVE-2016-0001",
"Type":"VulnerabilityPriorityIncreasedNotification",
"Content":{
"Vulnerability":{
"ID":"CVE-2016-0001",
"Link":"https:security-tracker.debian.org/tracker/CVE-2016-0001",
"Priority":"Critical",
"Description":"A futurist vulnerability",
"AffectedPackages":[
{
"OS":"centos:6",
"Name":"bash",
"AllVersions":true,
"BeforeVersion":""
}
]
},
"OldPriority":"Medium",
"NewPriority":"Critical",
"IntroducingLayersIDs":[
"fb9cc58bde0c0a8fe53e6fdd23898e45041783f2d7869d939d7364f5777fde6f"
]
}
}
```
The `IntroducingLayersIDs` array contains every layer that installs at least one affected package.
## A vulnerability's affected package list changed
This notification is sent when the affected packages of a vulnerability changes.
### Example
```
{
"Name":"CVE-2016-0001",
"Type":"VulnerabilityPackageChangedNotification",
"Content":{
"Vulnerability":{
"ID":"CVE-2016-0001",
"Link":"https:security-tracker.debian.org/tracker/CVE-2016-0001",
"Priority":"Critical",
"Description":"A futurist vulnerability",
"AffectedPackages":[
{
"OS":"centos:6",
"Name":"bash",
"AllVersions":false,
"BeforeVersion":"4.0"
}
]
},
"AddedAffectedPackages":[
{
"OS":"centos:6",
"Name":"bash",
"AllVersions":false,
"BeforeVersion":"4.0"
}
],
"RemovedAffectedPackages":[
{
"OS":"centos:6",
"Name":"bash",
"AllVersions":true,
"BeforeVersion":""
}
],
"NewIntroducingLayersIDs": [],
"FormerIntroducingLayerIDs":[
"fb9cc58bde0c0a8fe53e6fdd23898e45041783f2d7869d939d7364f5777fde6f",
]
}
}
```
The `NewIntroducingLayersIDs` array contains the layers that install at least one of the newly affected packages, and which are therefore now vulnerable because of this change. On the other hand, the `FormerIntroducingLayerIDs` array contains the layers that no longer introduce the vulnerability.

50
docs/Run.md Normal file

@ -0,0 +1,50 @@
# Build and Run with Docker
The easiest way to run this tool is to deploy it using Docker.
If you prefer to run it locally, reading the Dockerfile will tell you how.
To deploy it from the latest sources, follow this procedure:
* Clone the repository and change your current directory
* Build the container: `docker build -t <TAG> .`
* Run it like this to see the available commands: `docker run -it <TAG>`. To get help about a specific command, use `docker run -it <TAG> help <COMMAND>`
## Command-Line examples
When running multiple instances is not desired, the BoltDB backend is the best choice as it is lightning fast:
docker run -it <TAG> --db-type=bolt --db-path=/db/database
Using PostgreSQL enables running multiple instances concurrently. Here is a command line example:
docker run -it <TAG> --db-type=sql --db-path='host=awesome-database.us-east-1.rds.amazonaws.com port=5432 user=SuperSheep password=SuperSecret' --update-interval=2h --notifier-type=http --notifier-http-url="http://your-notification-endpoint"
The default API port is 6060, read the [API Documentation](API.md) to learn more.
# Build and Run at scale with AWS
CloudFormation templates are available under the `cloudformation/` folder. They help deploy the tool in an auto-scaling group behind a load balancer.
All *.yaml* files are [Jinja2](http://jinja.pocoo.org) templates.
Firstly, you need:
* A publicly accessible PostgreSQL RDS instance
* A HTTP endpoint ready for the notifier if you plan to have notifications
* A signed key pair and the CA certificate if you want the tool to run securely (see [Security.md](Security.md))
* The `cloudformation/` folder and the Python virtual environment: `virtualenv .venv && source .venv/bin/activate && pip install -r requirements.txt`
## Create a new ELB
* Extend or modify the `cloudformation/templates/lb.yaml` to fit your needs
* The `alarm_actions()` macro which defines actions to be taken by the CloudWatch alarm on the ELB
* Deploy the load balancer with: `python generate_stack.py <YAML_FILE> <AWS_REGION> <AWS_CLOUDFORMATION_BUCKET> <AWS_ACCESS_KEY> <AWS_SECRET_KEY> --upload <STACK_FRIENDLY_NAME>`
* Create a new AWS Route53 A Record alias to the newly created ELB
* Wait until the DNS record is propagated
## Deploy the app
* Extend or modify `cloudformation/templates/app.yaml` to fit your needs
* Command-line arguments are to be defined in the `app_arguments` variable, such as the RDS database information, the notifier endpoint and the key file paths (which are automatically written to `/etc/certs/quay-sec.crt`, `/etc/certs/quay-sec.key` and `/etc/certs/ca.crt` by the macros below)
* The `elb_names()` macro to specify the names of the load balancers
* The `logentries_token` if you want to aggregate the logs on LogEntries
* The `ssh_key_name` variable and the `ssh_public_keys` macro for the main and secondary SSH public keys
* The `app_public_key`, `app_private_key` and `app_ca` macros for the application certificate, private key and CA certificate, respectively
* Deploy the stack with: `python generate_stack.py <YAML_FILE> <AWS_REGION> <AWS_CLOUDFORMATION_BUCKET> <AWS_ACCESS_KEY> <AWS_SECRET_KEY> --upload <STACK_FRIENDLY_NAME> --image_tag <TAG>` in which `TAG` is an available tag on the [Quay.io repository](https://quay.io/repository/coreos/quay-sec), such as `latest`
* Wait until the instances appear as healthy in the Load Balancer
* Delete the old stack if there is one

54
docs/Security.md Normal file

@ -0,0 +1,54 @@
# Security
# Enabling HTTPS
HTTPS provides clients the ability to verify the server identity and provides transport security.
For this you need your CA certificate (ca.crt) and signed key pair (server.crt, server.key) ready.
To enable it, provide the signed key pair using the `--api-cert-file` and `--api-key-file` arguments.
To test it, use curl like this:
curl --cacert ca.crt -L https://127.0.0.1:6060/v1/versions
You should be able to see the handshake succeed. Because we use self-signed certificates with our own certificate authorities, you need to provide the CA to curl using the --cacert option. Another possibility would be to add your CA certificate to the trusted certificates on your system (usually in /etc/ssl/certs).
**OSX 10.9+ Users**: curl 7.30.0 on OSX 10.9+ doesn't understand certificates passed in on the command line. Instead, you must import the dummy ca.crt directly into the keychain or add the -k flag to curl to ignore errors. If you want to test without the -k flag, run `open ca.crt` and follow the prompts. Please remove this certificate after you are done testing!
# Enabling Client Certificate Auth
We can also use client certificates to prevent unauthorized access to the API.
The clients will provide their certificates to the server and the server will check whether the cert is signed by the supplied CA and decide whether to serve the request.
You need the same files mentioned in the HTTPS section, as well as a key pair for the client (client.crt, client.key) signed by the same certificate authority. To enable it, use the same arguments as above for HTTPS and the additional `--api-ca-file` parameter with the CA certificate.
The test command from the HTTPS section should be rejected; instead, we need to provide the client key pair:
curl --cacert ca.crt --cert client.crt --key client.key -L https://127.0.0.1:6060/v1/versions
**OSX 10.10+ Users**: A bundle in P12 (PKCS#12) format must be used. To convert your key pair, the following command should be used, in which the password is mandatory. Then, `--cert client.p12` along with `--password pass` replace `--cert client.crt --key client.key`. You may also import the P12 certificate into your Keychain and specify its name as it appears in the Keychain instead of the path to the file.
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12 -password pass:pass
# Generating self-signed certificates
[etcd-ca](https://github.com/coreos/etcd-ca) is a great tool when it comes to easily generating certificates. Below is an example that generates a new CA and server and client key pairs, inspired by their example.
```
git clone https://github.com/coreos/etcd-ca
cd etcd-ca
./build
# Create CA
./bin/etcd-ca init
./bin/etcd-ca export | tar xvf -
# Create certificate for server
./bin/etcd-ca new-cert --passphrase $passphrase --ip $server1ip --domain $server1hostname server1
./bin/etcd-ca sign --passphrase $passphrase server1
./bin/etcd-ca export --insecure --passphrase $passphrase server1 | tar xvf -
# Create certificate for client
./bin/etcd-ca new-cert --passphrase $passphrase client1
./bin/etcd-ca sign --passphrase $passphrase client1
./bin/etcd-ca export --insecure --passphrase $passphrase client1 | tar xvf -
```

80
health/health.go Normal file

@ -0,0 +1,80 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package health defines a standard healthcheck response format and exposes
// a function that summarizes registered healthchecks.
package health
import (
"fmt"
"sync"
)
// Status defines a way to know the health status of a service
type Status struct {
// IsEssential determines if the service is essential to the app, which can't
// run in case of a failure
IsEssential bool
// IsHealthy defines whether the service is working or not
IsHealthy bool
// Details gives information specific to the service
Details interface{}
}
// A Healthchecker function is a method returning the Status of the tested service
type Healthchecker func() Status
var (
healthcheckersLock sync.Mutex
healthcheckers = make(map[string]Healthchecker)
)
// RegisterHealthchecker registers a Healthchecker function which will be part of Healthchecks
func RegisterHealthchecker(name string, f Healthchecker) {
if name == "" {
panic("Could not register a Healthchecker with an empty name")
}
if f == nil {
panic("Could not register a nil Healthchecker")
}
healthcheckersLock.Lock()
defer healthcheckersLock.Unlock()
if _, alreadyExists := healthcheckers[name]; alreadyExists {
panic(fmt.Sprintf("Healthchecker '%s' is already registered", name))
}
healthcheckers[name] = f
}
// Healthcheck calls every registered Healthchecker and summarizes their output
func Healthcheck() (bool, map[string]interface{}) {
globalHealth := true
statuses := make(map[string]interface{})
for serviceName, serviceChecker := range healthcheckers {
status := serviceChecker()
globalHealth = globalHealth && (!status.IsEssential || status.IsHealthy)
statuses[serviceName] = struct {
IsHealthy bool
Details interface{} `json:",omitempty"`
}{
IsHealthy: status.IsHealthy,
Details: status.Details,
}
}
return globalHealth, statuses
}
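A minimal usage sketch for this package follows; it is not part of the commit, and the service name and Status values are illustrative only:
```
package main

import (
	"fmt"

	"github.com/coreos/quay-sec/health"
)

func main() {
	// Register a trivial healthchecker; real components (database, notifier,
	// updater) register theirs the same way with meaningful Status values.
	health.RegisterHealthchecker("example", func() health.Status {
		return health.Status{IsEssential: true, IsHealthy: true}
	})

	healthy, statuses := health.Healthcheck()
	fmt.Println(healthy, statuses)
}
```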

148
main.go Normal file

@ -0,0 +1,148 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"math/rand"
"os"
"os/signal"
"runtime/pprof"
"strings"
"time"
"github.com/coreos/quay-sec/api"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/notifier"
"github.com/coreos/quay-sec/updater"
"github.com/coreos/quay-sec/utils"
"github.com/coreos/pkg/capnslog"
"gopkg.in/alecthomas/kingpin.v2"
// Register components
_ "github.com/coreos/quay-sec/updater/fetchers"
_ "github.com/coreos/quay-sec/worker/detectors/os"
_ "github.com/coreos/quay-sec/worker/detectors/packages"
)
var (
log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "main")
// Database configuration
cfgDbType = kingpin.Flag("db-type", "Type of the database to use").Default("bolt").Enum("bolt", "leveldb", "memstore", "mongo", "sql")
cfgDbPath = kingpin.Flag("db-path", "Path to the database to use").String()
// Notifier configuration
cfgNotifierType = kingpin.Flag("notifier-type", "Type of the notifier to use").Default("none").Enum("none", "http")
cfgNotifierHTTPURL = kingpin.Flag("notifier-http-url", "URL that will receive POST notifications").String()
// Updater configuration
cfgUpdateInterval = kingpin.Flag("update-interval", "Frequency at which the vulnerability updater will run. Use 0 to disable the updater entirely.").Default("1h").Duration()
// API configuration
cfgAPIPort = kingpin.Flag("api-port", "Port on which the API will listen").Default("6060").Int()
cfgAPITimeout = kingpin.Flag("api-timeout", "Timeout of API calls").Default("900s").Duration()
cfgAPICertFile = kingpin.Flag("api-cert-file", "Path to TLS Cert file").ExistingFile()
cfgAPIKeyFile = kingpin.Flag("api-key-file", "Path to TLS Key file").ExistingFile()
cfgAPICAFile = kingpin.Flag("api-ca-file", "Path to CA for verifying TLS client certs").ExistingFile()
// Other flags
cfgCPUProfilePath = kingpin.Flag("cpu-profile-path", "Path to a write CPU profiling data").String()
cfgLogLevel = kingpin.Flag("log-level", "How much console-spam do you want globally").Default("info").Enum("trace", "debug", "info", "notice", "warning", "error", "critical")
)
func main() {
rand.Seed(time.Now().UTC().UnixNano())
var err error
st := utils.NewStopper()
// Parse command-line arguments
kingpin.Parse()
if *cfgDbType != "memstore" && *cfgDbPath == "" {
kingpin.Errorf("required flag --db-path not provided, try --help")
os.Exit(1)
}
if *cfgNotifierType == "http" && *cfgNotifierHTTPURL == "" {
kingpin.Errorf("required flag --notifier-http-url not provided, try --help")
os.Exit(1)
}
// Initialize error/logging system
logLevel, err := capnslog.ParseLevel(strings.ToUpper(*cfgLogLevel))
capnslog.SetGlobalLogLevel(logLevel)
capnslog.SetFormatter(capnslog.NewPrettyFormatter(os.Stdout, false))
// Enable CPU Profiling if specified
if *cfgCPUProfilePath != "" {
f, err := os.Create(*cfgCPUProfilePath)
if err != nil {
log.Fatalf("failed to create profile file: %s", err)
}
defer f.Close()
pprof.StartCPUProfile(f)
log.Info("started profiling")
defer func() {
pprof.StopCPUProfile()
log.Info("stopped profiling")
}()
}
// Open database
err = database.Open(*cfgDbType, *cfgDbPath)
if err != nil {
log.Fatal(err)
}
defer database.Close()
// Start notifier
var notifierService notifier.Notifier
switch *cfgNotifierType {
case "http":
notifierService, err = notifier.NewHTTPNotifier(*cfgNotifierHTTPURL)
if err != nil {
log.Fatalf("could not initialize HTTP notifier: %s", err)
}
}
if notifierService != nil {
st.Begin()
go notifierService.Run(st)
}
// Start Main API and Health API
st.Begin()
go api.RunMain(&api.Config{
Port: *cfgAPIPort,
TimeOut: *cfgAPITimeout,
CertFile: *cfgAPICertFile,
KeyFile: *cfgAPIKeyFile,
CAFile: *cfgAPICAFile,
}, st)
st.Begin()
go api.RunHealth(*cfgAPIPort+1, st)
// Start updater
st.Begin()
go updater.Run(*cfgUpdateInterval, st)
// This blocks the main goroutine which is required to keep all the other goroutines running
interrupts := make(chan os.Signal, 1)
signal.Notify(interrupts, os.Interrupt)
<-interrupts
log.Info("Received interruption, gracefully stopping ...")
st.Stop()
}

173
notifier/notifier.go Normal file

@ -0,0 +1,173 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package notifier fetches notifications from the database and sends them
// to the specified remote handler.
package notifier
import (
"bytes"
"encoding/json"
"net/http"
"net/url"
"time"
"github.com/coreos/pkg/capnslog"
"github.com/coreos/pkg/timeutil"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/health"
"github.com/coreos/quay-sec/utils"
"github.com/pborman/uuid"
)
// A Notifier dispatches notifications
type Notifier interface {
Run(*utils.Stopper)
}
var log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "notifier")
const (
maxBackOff = 5 * time.Minute
checkInterval = 5 * time.Second
refreshLockAnticipation = time.Minute * 2
lockDuration = time.Minute*8 + refreshLockAnticipation
)
// A HTTPNotifier dispatches notifications to an HTTP endpoint with POST requests
type HTTPNotifier struct {
url string
}
// NewHTTPNotifier initializes a new HTTPNotifier
func NewHTTPNotifier(URL string) (*HTTPNotifier, error) {
if _, err := url.Parse(URL); err != nil {
return nil, cerrors.NewBadRequestError("could not create a notifier with an invalid URL")
}
notifier := &HTTPNotifier{url: URL}
health.RegisterHealthchecker("notifier", notifier.Healthcheck)
return notifier, nil
}
// Run pops notifications from the database, locks them, sends them, marks them as
// sent and unlocks them
//
// It uses an exponential backoff when POST requests fail
func (notifier *HTTPNotifier) Run(st *utils.Stopper) {
defer st.End()
whoAmI := uuid.New()
log.Infof("HTTP notifier started. URL: %s. Lock Identifier: %s", notifier.url, whoAmI)
for {
node, notification, err := database.FindOneNotificationToSend(database.GetDefaultNotificationWrapper())
if notification == nil || err != nil {
if err != nil {
log.Warningf("could not get notification to send: %s.", err)
}
if !st.Sleep(checkInterval) {
break
}
continue
}
// Try to lock the notification
hasLock, hasLockUntil := database.Lock(node, lockDuration, whoAmI)
if !hasLock {
continue
}
for backOff := time.Duration(0); ; backOff = timeutil.ExpBackoff(backOff, maxBackOff) {
// Backoff, it happens when an error occurs during the communication
// with the notification endpoint
if backOff > 0 {
// Renew lock before going to sleep if necessary
if time.Now().Add(backOff).After(hasLockUntil.Add(-refreshLockAnticipation)) {
hasLock, hasLockUntil = database.Lock(node, lockDuration, whoAmI)
if !hasLock {
log.Warning("lost lock ownership, aborting")
break
}
}
// Sleep
if !st.Sleep(backOff) {
return
}
}
// Get notification content
content, err := notification.GetContent()
if err != nil {
log.Warningf("could not get content of notification '%s': %s", notification.GetName(), err.Error())
break
}
// Marshal the notification content
jsonContent, err := json.Marshal(struct {
Name, Type string
Content interface{}
}{
Name: notification.GetName(),
Type: notification.GetType(),
Content: content,
})
if err != nil {
log.Errorf("could not marshal content of notification '%s': %s", notification.GetName(), err.Error())
break
}
// Post notification
req, _ := http.NewRequest("POST", notifier.url, bytes.NewBuffer(jsonContent))
req.Header.Set("Content-Type", "application/json")
client := &http.Client{}
res, err := client.Do(req)
if err != nil {
log.Warningf("could not post notification '%s': %s", notification.GetName(), err.Error())
continue
}
res.Body.Close()
if res.StatusCode != 200 && res.StatusCode != 201 {
log.Warningf("could not post notification '%s': got status code %d", notification.GetName(), res.StatusCode)
continue
}
// Mark the notification as sent
database.MarkNotificationAsSent(node)
log.Infof("sent notification '%s' successfully", notification.GetName())
break
}
if hasLock {
database.Unlock(node, whoAmI)
}
}
log.Info("HTTP notifier stopped")
}
// Healthcheck returns the health of the notifier service
func (notifier *HTTPNotifier) Healthcheck() health.Status {
queueSize, err := database.CountNotificationsToSend()
return health.Status{IsEssential: false, IsHealthy: err == nil, Details: struct{ QueueSize int }{QueueSize: queueSize}}
}

64
updater/fetchers.go Normal file

@ -0,0 +1,64 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package updater
import (
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/utils/types"
)
var fetchers = make(map[string]Fetcher)
// Fetcher represents anything that can fetch vulnerabilities.
type Fetcher interface {
FetchUpdate() (FetcherResponse, error)
}
// FetcherResponse represents the sum of results of an update.
type FetcherResponse struct {
FlagName string
FlagValue string
Notes []string
Vulnerabilities []FetcherVulnerability
}
// FetcherVulnerability represents an individual vulnerability processed from
// an update.
type FetcherVulnerability struct {
ID string
Link string
Description string
Priority types.Priority
FixedIn []*database.Package
}
// RegisterFetcher makes a Fetcher available by the provided name.
// If RegisterFetcher is called twice with the same name or if the Fetcher is nil,
// it panics.
func RegisterFetcher(name string, f Fetcher) {
if name == "" {
panic("updater: could not register a Fetcher with an empty name")
}
if f == nil {
panic("updater: could not register a nil Fetcher")
}
if _, dup := fetchers[name]; dup {
panic("updater: RegisterFetcher called twice for " + name)
}
fetchers[name] = f
}
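A minimal, hypothetical registration sketch following the pattern above; the real Debian fetcher below does the same thing with actual data:
```
package fetchers

import "github.com/coreos/quay-sec/updater"

// dummyFetcher is a hypothetical Fetcher that reports nothing; it only
// illustrates the FetchUpdate contract and init-time registration.
type dummyFetcher struct{}

func init() {
	updater.RegisterFetcher("dummy", &dummyFetcher{})
}

// FetchUpdate returns an empty response with a single note.
func (f *dummyFetcher) FetchUpdate() (updater.FetcherResponse, error) {
	return updater.FetcherResponse{Notes: []string{"dummy fetcher: nothing to do"}}, nil
}
```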

240
updater/fetchers/debian.go Normal file

@ -0,0 +1,240 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"crypto/sha1"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/updater"
"github.com/coreos/quay-sec/utils/types"
)
const (
url = "https://security-tracker.debian.org/tracker/data/json"
cveURLPrefix = "https://security-tracker.debian.org/tracker"
debianUpdaterFlag = "debianUpdater"
)
type jsonData map[string]map[string]jsonVuln
type jsonVuln struct {
Description string `json:"description"`
Releases map[string]jsonRel `json:"releases"`
}
type jsonRel struct {
FixedVersion string `json:"fixed_version"`
Status string `json:"status"`
Urgency string `json:"urgency"`
}
// DebianFetcher implements updater.Fetcher for the Debian Security Tracker
// (https://security-tracker.debian.org).
type DebianFetcher struct{}
func init() {
updater.RegisterFetcher("debian", &DebianFetcher{})
}
// FetchUpdate fetches vulnerability updates from the Debian Security Tracker.
func (fetcher *DebianFetcher) FetchUpdate() (resp updater.FetcherResponse, err error) {
log.Info("fetching Debian vulnerabilities")
// Download JSON.
r, err := http.Get(url)
if err != nil {
log.Errorf("could not download Debian's update: %s", err)
return resp, cerrors.ErrCouldNotDownload
}
defer r.Body.Close()
// Get the SHA-1 of the latest update's JSON data
latestHash, err := database.GetFlagValue(debianUpdaterFlag)
if err != nil {
return resp, err
}
// Parse the JSON.
resp, err = buildResponse(r.Body, latestHash)
if err != nil {
return resp, err
}
return resp, nil
}
func buildResponse(jsonReader io.Reader, latestKnownHash string) (resp updater.FetcherResponse, err error) {
hash := latestKnownHash
// Defer the addition of flag information to the response.
defer func() {
if err == nil {
resp.FlagName = debianUpdaterFlag
resp.FlagValue = hash
}
}()
// Create a TeeReader so that we can decode the JSON and write to a SHA-1
// digest at the same time.
jsonSHA := sha1.New()
teedJSONReader := io.TeeReader(jsonReader, jsonSHA)
// Unmarshal JSON.
var data jsonData
err = json.NewDecoder(teedJSONReader).Decode(&data)
if err != nil {
log.Errorf("could not unmarshal Debian's JSON: %s", err)
return resp, ErrCouldNotParse
}
// Calculate the hash and skip updating if the hash has been seen before.
hash = hex.EncodeToString(jsonSHA.Sum(nil))
if latestKnownHash == hash {
log.Debug("no Debian update")
return resp, nil
}
// Extract vulnerability data from Debian's JSON schema.
vulnerabilities, unknownReleases := parseDebianJSON(&data)
// Log unknown releases
for k := range unknownReleases {
note := fmt.Sprintf("Debian %s is not mapped to any version number (e.g. Jessie->8). Please update me.", k)
resp.Notes = append(resp.Notes, note)
log.Warning(note)
}
// Convert the vulnerabilities map to a slice in the response
for _, v := range vulnerabilities {
resp.Vulnerabilities = append(resp.Vulnerabilities, v)
}
return resp, nil
}
func parseDebianJSON(data *jsonData) (vulnerabilities map[string]updater.FetcherVulnerability, unknownReleases map[string]struct{}) {
vulnerabilities = make(map[string]updater.FetcherVulnerability)
unknownReleases = make(map[string]struct{})
for pkgName, pkgNode := range *data {
for vulnName, vulnNode := range pkgNode {
for releaseName, releaseNode := range vulnNode.Releases {
// Attempt to detect the release number.
if _, isReleaseKnown := database.DebianReleasesMapping[releaseName]; !isReleaseKnown {
unknownReleases[releaseName] = struct{}{}
continue
}
// Skip if the release is not affected.
if releaseNode.FixedVersion == "0" || releaseNode.Status == "undetermined" {
continue
}
// Get or create the vulnerability.
vulnerability, vulnerabilityAlreadyExists := vulnerabilities[vulnName]
if !vulnerabilityAlreadyExists {
vulnerability = updater.FetcherVulnerability{
ID: vulnName,
Link: strings.Join([]string{cveURLPrefix, "/", vulnName}, ""),
Priority: types.Unknown,
Description: vulnNode.Description,
}
}
// Set the priority of the vulnerability.
// In the JSON, a vulnerability has one urgency per package it affects.
// The highest urgency should be the one set.
urgency := urgencyToPriority(releaseNode.Urgency)
if urgency.Compare(vulnerability.Priority) > 0 {
vulnerability.Priority = urgency
}
// Determine the version of the package the vulnerability affects.
var version types.Version
var err error
if releaseNode.Status == "open" {
// Open means that the package is currently vulnerable in the latest
// version of this Debian release.
version = types.MaxVersion
} else if releaseNode.Status == "resolved" {
// Resolved means that the vulnerability has been fixed in
// "fixed_version" (if affected).
version, err = types.NewVersion(releaseNode.FixedVersion)
if err != nil {
log.Warningf("could not parse package version '%s': %s. skipping", releaseNode.FixedVersion, err.Error())
continue
}
}
// Create and add the package.
pkg := &database.Package{
OS: "debian:" + database.DebianReleasesMapping[releaseName],
Name: pkgName,
Version: version,
}
vulnerability.FixedIn = append(vulnerability.FixedIn, pkg)
// Store the vulnerability.
vulnerabilities[vulnName] = vulnerability
}
}
}
return
}
func urgencyToPriority(urgency string) types.Priority {
switch urgency {
case "not yet assigned":
return types.Unknown
case "end-of-life":
fallthrough
case "unimportant":
return types.Negligible
case "low":
fallthrough
case "low*":
fallthrough
case "low**":
return types.Low
case "medium":
fallthrough
case "medium*":
fallthrough
case "medium**":
return types.Medium
case "high":
fallthrough
case "high*":
fallthrough
case "high**":
return types.High
default:
log.Warningf("could not determine vulnerability priority from: %s", urgency)
return types.Unknown
}
}
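
One reusable detail above is the TeeReader: the fetcher hashes exactly the bytes consumed by the JSON decoder and compares the digest against the stored flag to detect that nothing changed upstream. A minimal, standalone sketch of that pattern (with made-up input, unrelated to the Debian data) looks like this:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

func main() {
	body := strings.NewReader(`{"hello": "world"}`)

	// Every byte read by the JSON decoder is also written to the SHA-1 digest.
	digest := sha1.New()
	tee := io.TeeReader(body, digest)

	var decoded map[string]string
	if err := json.NewDecoder(tee).Decode(&decoded); err != nil {
		panic(err)
	}

	// Comparing this hash with the previously stored flag value tells the
	// updater whether the upstream document actually changed.
	fmt.Println(decoded["hello"], hex.EncodeToString(digest.Sum(nil)))
}
```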

View File

@ -0,0 +1,80 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"os"
"path"
"runtime"
"testing"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/utils/types"
"github.com/stretchr/testify/assert"
)
func TestDebianParser(t *testing.T) {
_, filename, _, _ := runtime.Caller(0)
// Test parsing testdata/fetcher_debian_test.json
testFile, _ := os.Open(path.Join(path.Dir(filename), "testdata", "fetcher_debian_test.json"))
response, err := buildResponse(testFile, "")
if assert.Nil(t, err) && assert.Len(t, response.Vulnerabilities, 2) {
for _, vulnerability := range response.Vulnerabilities {
if vulnerability.ID == "CVE-2015-1323" {
assert.Equal(t, "https://security-tracker.debian.org/tracker/CVE-2015-1323", vulnerability.Link)
assert.Equal(t, types.Low, vulnerability.Priority)
assert.Equal(t, "This vulnerability is not very dangerous.", vulnerability.Description)
if assert.Len(t, vulnerability.FixedIn, 2) {
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "debian:8",
Name: "aptdaemon",
Version: types.MaxVersion,
})
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "debian:unstable",
Name: "aptdaemon",
Version: types.NewVersionUnsafe("1.1.1+bzr982-1"),
})
}
} else if vulnerability.ID == "CVE-2003-0779" {
assert.Equal(t, "https://security-tracker.debian.org/tracker/CVE-2003-0779", vulnerability.Link)
assert.Equal(t, types.High, vulnerability.Priority)
assert.Equal(t, "But this one is very dangerous.", vulnerability.Description)
if assert.Len(t, vulnerability.FixedIn, 3) {
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "debian:8",
Name: "aptdaemon",
Version: types.NewVersionUnsafe("0.7.0"),
})
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "debian:unstable",
Name: "aptdaemon",
Version: types.NewVersionUnsafe("0.7.0"),
})
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "debian:8",
Name: "asterisk",
Version: types.NewVersionUnsafe("0.5.56"),
})
}
} else {
assert.Fail(t, "Wrong vulnerability name: ", vulnerability.ID)
}
}
}
}

View File

@ -0,0 +1,32 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package fetchers implements vulnerability fetchers for several sources.
package fetchers
import (
"errors"
"github.com/coreos/pkg/capnslog"
)
var (
log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "updater/fetchers")
// ErrCouldNotParse is returned when a fetcher fails to parse the update data.
ErrCouldNotParse = errors.New("updater/fetchers: could not parse")
// ErrFilesystem is returned when a fetcher fails to interact with the local filesystem.
ErrFilesystem = errors.New("updater/fetchers: something went wrong when interacting with the fs")
)

353
updater/fetchers/rhel.go Normal file
View File

@ -0,0 +1,353 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"bufio"
"encoding/xml"
"io"
"net/http"
"regexp"
"strconv"
"strings"
"github.com/coreos/quay-sec/database"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/updater"
"github.com/coreos/quay-sec/utils/types"
)
const (
// RHSAs before this one only deal with RHEL <= 4.
firstRHEL5RHSA = 20070044
firstConsideredRHEL = 5
ovalURI = "https://www.redhat.com/security/data/oval/"
rhsaFilePrefix = "com.redhat.rhsa-"
rhelUpdaterFlag = "rhelUpdater"
)
var (
ignoredCriterions = []string{
" is signed with Red Hat ",
" Client is installed",
" Workstation is installed",
" ComputeNode is installed",
}
rhsaRegexp = regexp.MustCompile(`com.redhat.rhsa-(\d+).xml`)
)
type oval struct {
Definitions []definition `xml:"definitions>definition"`
}
type definition struct {
Title string `xml:"metadata>title"`
Description string `xml:"metadata>description"`
References []reference `xml:"metadata>reference"`
Criteria criteria `xml:"criteria"`
}
type reference struct {
Source string `xml:"source,attr"`
URI string `xml:"ref_url,attr"`
}
type criteria struct {
Operator string `xml:"operator,attr"`
Criterias []*criteria `xml:"criteria"`
Criterions []criterion `xml:"criterion"`
}
type criterion struct {
Comment string `xml:"comment,attr"`
}
// RHELFetcher implements updater.Fetcher and gets vulnerability updates from
// the Red Hat OVAL definitions.
type RHELFetcher struct{}
func init() {
updater.RegisterFetcher("Red Hat", &RHELFetcher{})
}
// FetchUpdate gets vulnerability updates from the Red Hat OVAL definitions.
func (f *RHELFetcher) FetchUpdate() (resp updater.FetcherResponse, err error) {
log.Info("fetching Red Hat vulnerabilities")
// Get the first RHSA we have to manage.
flagValue, err := database.GetFlagValue(rhelUpdaterFlag)
if err != nil {
return resp, err
}
firstRHSA, err := strconv.Atoi(flagValue)
if firstRHSA == 0 || err != nil {
firstRHSA = firstRHEL5RHSA
}
// Fetch the update list.
r, err := http.Get(ovalURI)
if err != nil {
log.Errorf("could not download RHEL's update list: %s", err)
return resp, cerrors.ErrCouldNotDownload
}
defer r.Body.Close()
// Get the list of RHSAs that we have to process.
var rhsaList []int
scanner := bufio.NewScanner(r.Body)
for scanner.Scan() {
line := scanner.Text()
m := rhsaRegexp.FindStringSubmatch(line)
if len(m) == 2 {
rhsaNo, _ := strconv.Atoi(m[1])
if rhsaNo > firstRHSA {
rhsaList = append(rhsaList, rhsaNo)
}
}
}
for _, rhsa := range rhsaList {
// Download the RHSA's XML file.
r, err := http.Get(ovalURI + rhsaFilePrefix + strconv.Itoa(rhsa) + ".xml")
if err != nil {
log.Errorf("could not download RHEL's update file: %s", err)
return resp, cerrors.ErrCouldNotDownload
}
// Parse the XML and close the response body.
vs, err := parseRHSA(r.Body)
r.Body.Close()
if err != nil {
return resp, err
}
// Collect vulnerabilities.
for _, v := range vs {
if len(v.FixedIn) > 0 {
resp.Vulnerabilities = append(resp.Vulnerabilities, v)
}
}
}
// Set the flag if we found anything.
if len(rhsaList) > 0 {
resp.FlagName = rhelUpdaterFlag
resp.FlagValue = strconv.Itoa(rhsaList[len(rhsaList)-1])
} else {
log.Debug("no Red Hat update.")
}
return resp, nil
}
func parseRHSA(ovalReader io.Reader) (vulnerabilities []updater.FetcherVulnerability, err error) {
// Decode the XML.
var ov oval
err = xml.NewDecoder(ovalReader).Decode(&ov)
if err != nil {
log.Errorf("could not decode RHEL's XML: %s.", err)
err = ErrCouldNotParse
return
}
// Iterate over the definitions and collect any vulnerabilities that affect
// at least one package.
for _, definition := range ov.Definitions {
packages := toPackages(definition.Criteria)
if len(packages) > 0 {
vuln := updater.FetcherVulnerability{
ID: name(definition),
Link: link(definition),
Priority: priority(definition),
Description: description(definition),
FixedIn: packages,
}
vulnerabilities = append(vulnerabilities, vuln)
}
}
return
}
func getCriterions(node criteria) [][]criterion {
// Filter useless criterions.
var criterions []criterion
for _, c := range node.Criterions {
ignored := false
for _, ignoredItem := range ignoredCriterions {
if strings.Contains(c.Comment, ignoredItem) {
ignored = true
break
}
}
if !ignored {
criterions = append(criterions, c)
}
}
if node.Operator == "AND" {
return [][]criterion{criterions}
} else if node.Operator == "OR" {
var possibilities [][]criterion
for _, c := range criterions {
possibilities = append(possibilities, []criterion{c})
}
return possibilities
}
return [][]criterion{}
}
func getPossibilities(node criteria) [][]criterion {
if len(node.Criterias) == 0 {
return getCriterions(node)
}
var possibilitiesToCompose [][][]criterion
for _, criteria := range node.Criterias {
possibilitiesToCompose = append(possibilitiesToCompose, getPossibilities(*criteria))
}
if len(node.Criterions) > 0 {
possibilitiesToCompose = append(possibilitiesToCompose, getCriterions(node))
}
var possibilities [][]criterion
if node.Operator == "AND" {
for _, possibility := range possibilitiesToCompose[0] {
possibilities = append(possibilities, possibility)
}
for _, possibilityGroup := range possibilitiesToCompose[1:] {
var newPossibilities [][]criterion
for _, possibility := range possibilities {
for _, possibilityInGroup := range possibilityGroup {
var p []criterion
p = append(p, possibility...)
p = append(p, possibilityInGroup...)
newPossibilities = append(newPossibilities, p)
}
}
possibilities = newPossibilities
}
} else if node.Operator == "OR" {
for _, possibilityGroup := range possibilitiesToCompose {
for _, possibility := range possibilityGroup {
possibilities = append(possibilities, possibility)
}
}
}
return possibilities
}
func toPackages(criteria criteria) []*database.Package {
// There are duplicates in Red Hat .xml files.
// This map is for deduplication.
packagesParameters := make(map[string]*database.Package)
possibilities := getPossibilities(criteria)
for _, criterions := range possibilities {
var (
pkg database.Package
osVersion int
err error
)
// Attempt to parse package data from trees of criterions.
for _, c := range criterions {
if strings.Contains(c.Comment, " is installed") {
const prefixLen = len("Red Hat Enterprise Linux ")
osVersion, err = strconv.Atoi(strings.TrimSpace(c.Comment[prefixLen : prefixLen+strings.Index(c.Comment[prefixLen:], " ")]))
if err != nil {
log.Warningf("could not parse Red Hat release version from: '%s'.", c.Comment)
}
} else if strings.Contains(c.Comment, " is earlier than ") {
const prefixLen = len(" is earlier than ")
pkg.Name = strings.TrimSpace(c.Comment[:strings.Index(c.Comment, " is earlier than ")])
pkg.Version, err = types.NewVersion(c.Comment[strings.Index(c.Comment, " is earlier than ")+prefixLen:])
if err != nil {
log.Warningf("could not parse package version '%s': %s. skipping", c.Comment[strings.Index(c.Comment, " is earlier than ")+prefixLen:], err.Error())
}
}
}
if osVersion > firstConsideredRHEL {
pkg.OS = "centos" + ":" + strconv.Itoa(osVersion)
} else {
continue
}
if pkg.OS != "" && pkg.Name != "" && pkg.Version.String() != "" {
packagesParameters[pkg.Key()] = &pkg
} else {
log.Warningf("could not determine a valid package from criterions: %v", criterions)
}
}
// Convert the map to slice.
var packagesParametersArray []*database.Package
for _, p := range packagesParameters {
packagesParametersArray = append(packagesParametersArray, p)
}
return packagesParametersArray
}
func description(def definition) (desc string) {
// It is much faster to proceed like this than to use a Replacer.
desc = strings.Replace(def.Description, "\n\n\n", " ", -1)
desc = strings.Replace(desc, "\n\n", " ", -1)
desc = strings.Replace(desc, "\n", " ", -1)
return
}
func name(def definition) string {
return strings.TrimSpace(def.Title[:strings.Index(def.Title, ": ")])
}
func link(def definition) (link string) {
for _, reference := range def.References {
if reference.Source == "RHSA" {
link = reference.URI
break
}
}
return
}
func priority(def definition) types.Priority {
// Parse the priority.
priority := strings.TrimSpace(def.Title[strings.LastIndex(def.Title, "(")+1 : len(def.Title)-1])
// Normalize the priority.
switch priority {
case "Low":
return types.Low
case "Moderate":
return types.Medium
case "Important":
return types.High
case "Critical":
return types.Critical
default:
log.Warningf("could not determine vulnerability priority from: %s", priority)
return types.Unknown
}
}
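
The least obvious part of this fetcher is how getPossibilities flattens an OVAL criteria tree: OR nodes contribute alternatives, AND nodes take the cartesian product of their children, and each flattened list is then turned into a package by toPackages. The sketch below is illustrative only; it assumes it sits next to rhel.go in package fetchers (it relies on the unexported types) and walks a stripped-down version of the first test fixture.

```go
package fetchers

import "fmt"

// sketchCriteriaExpansion expands a tiny OVAL-like tree of the form
// (RHEL 7 Server is installed) AND (xerces-c OR xerces-c-devel is earlier than ...)
// into two flattened possibilities, each pairing the release criterion with
// one of the package criterions.
func sketchCriteriaExpansion() {
	tree := criteria{
		Operator: "AND",
		Criterias: []*criteria{
			{Operator: "OR", Criterions: []criterion{
				{Comment: "Red Hat Enterprise Linux 7 Server is installed"},
			}},
			{Operator: "OR", Criterions: []criterion{
				{Comment: "xerces-c is earlier than 0:3.1.1-7.el7_1"},
				{Comment: "xerces-c-devel is earlier than 0:3.1.1-7.el7_1"},
			}},
		},
	}
	for i, possibility := range getPossibilities(tree) {
		for _, c := range possibility {
			fmt.Printf("possibility %d: %s\n", i, c.Comment)
		}
	}
}
```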

View File

@ -0,0 +1,82 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"os"
"path"
"runtime"
"testing"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/utils/types"
"github.com/stretchr/testify/assert"
)
func TestRHELParser(t *testing.T) {
_, filename, _, _ := runtime.Caller(0)
dir := path.Dir(filename)
// Test parsing testdata/fetcher_rhel_test.1.xml
testFile, _ := os.Open(dir + "/testdata/fetcher_rhel_test.1.xml")
vulnerabilities, err := parseRHSA(testFile)
if assert.Nil(t, err) && assert.Len(t, vulnerabilities, 1) {
assert.Equal(t, "RHSA-2015:1193", vulnerabilities[0].ID)
assert.Equal(t, "https://rhn.redhat.com/errata/RHSA-2015-1193.html", vulnerabilities[0].Link)
assert.Equal(t, types.Medium, vulnerabilities[0].Priority)
assert.Equal(t, `Xerces-C is a validating XML parser written in a portable subset of C++. A flaw was found in the way the Xerces-C XML parser processed certain XML documents. A remote attacker could provide specially crafted XML input that, when parsed by an application using Xerces-C, would cause that application to crash.`, vulnerabilities[0].Description)
if assert.Len(t, vulnerabilities[0].FixedIn, 3) {
assert.Contains(t, vulnerabilities[0].FixedIn, &database.Package{
OS: "centos:7",
Name: "xerces-c",
Version: types.NewVersionUnsafe("3.1.1-7.el7_1"),
})
assert.Contains(t, vulnerabilities[0].FixedIn, &database.Package{
OS: "centos:7",
Name: "xerces-c-devel",
Version: types.NewVersionUnsafe("3.1.1-7.el7_1"),
})
assert.Contains(t, vulnerabilities[0].FixedIn, &database.Package{
OS: "centos:7",
Name: "xerces-c-doc",
Version: types.NewVersionUnsafe("3.1.1-7.el7_1"),
})
}
}
// Test parsing testdata/fetcher_rhel_test.2.xml
testFile, _ = os.Open(dir + "/testdata/fetcher_rhel_test.2.xml")
vulnerabilities, err = parseRHSA(testFile)
if assert.Nil(t, err) && assert.Len(t, vulnerabilities, 1) {
assert.Equal(t, "RHSA-2015:1207", vulnerabilities[0].ID)
assert.Equal(t, "https://rhn.redhat.com/errata/RHSA-2015-1207.html", vulnerabilities[0].Link)
assert.Equal(t, types.Critical, vulnerabilities[0].Priority)
assert.Equal(t, `Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.`, vulnerabilities[0].Description)
if assert.Len(t, vulnerabilities[0].FixedIn, 2) {
assert.Contains(t, vulnerabilities[0].FixedIn, &database.Package{
OS: "centos:6",
Name: "firefox",
Version: types.NewVersionUnsafe("38.1.0-1.el6_6"),
})
assert.Contains(t, vulnerabilities[0].FixedIn, &database.Package{
OS: "centos:7",
Name: "firefox",
Version: types.NewVersionUnsafe("38.1.0-1.el7_1"),
})
}
}
}

View File

@ -0,0 +1,99 @@
{
"aptdaemon": {
"CVE-2015-1323": {
"_comment": "Two standard cases with a non-fixed package and a fixed one.",
"description": "This vulnerability is not very dangerous.",
"debianbug": 789162,
"releases": {
"wheezy": {
"repositories": {
"jessie": "bad version"
},
"status": "resolved",
"urgency": "low**"
},
"jessie": {
"repositories": {
"jessie": "1.1.1-4"
},
"status": "open",
"urgency": "low**"
},
"sid": {
"fixed_version": "1.1.1+bzr982-1",
"repositories": {
"sid": "1.1.1+bzr982-1"
},
"status": "resolved",
"urgency": "not yet assigned"
}
}
},
"CVE-2003-0779": {
"_comment": "Just another CVE affecting the same package.",
"description": "But this one is very dangerous.",
"releases": {
"jessie": {
"fixed_version": "0.7.0",
"repositories": {
"jessie": "1:11.13.1~dfsg-2"
},
"status": "resolved",
"urgency": "high**"
},
"sid": {
"fixed_version": "0.7.0",
"repositories": {
"sid": "1:13.1.0~dfsg-1.1"
},
"status": "resolved",
"urgency": "high**"
}
}
}
},
"asterisk": {
"CVE-2013-2685": {
"description": "Un-affected packages",
"releases": {
"jessie": {
"fixed_version": "0",
"repositories": {
"jessie": "1:11.13.1~dfsg-2"
},
"status": "resolved",
"urgency": "unimportant"
},
"wheezy": {
"repositories": {
"sid": "1:13.1.0~dfsg-1.1"
},
"status": "undetermined",
"urgency": "unimportant"
},
"sid": {
"fixed_version": "0",
"repositories": {
"sid": "1:13.1.0~dfsg-1.1"
},
"status": "resolved",
"urgency": "unimportant"
}
}
},
"CVE-2003-0779": {
"_comment": "A CVE which affect aptdaemon, and which also affects asterisk",
"description": "But this one is very dangerous.",
"releases": {
"jessie": {
"fixed_version": "0.5.56",
"repositories": {
"jessie": "1:1.17.2"
},
"status": "resolved",
"urgency": "high"
}
}
}
}
}

View File

@ -0,0 +1,154 @@
<?xml version="1.0" encoding="UTF-8"?>
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5" xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5" xmlns:oval-def="http://oval.mitre.org/XMLSchema/oval-definitions-5" xmlns:unix-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix" xmlns:red-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://oval.mitre.org/XMLSchema/oval-common-5 oval-common-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5 oval-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#unix unix-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#linux linux-definitions-schema.xsd">
<generator>
<oval:product_name>Red Hat Errata System</oval:product_name>
<oval:schema_version>5.10.1</oval:schema_version>
<oval:timestamp>2015-06-29T12:11:23</oval:timestamp>
</generator>
<definitions>
<definition id="oval:com.redhat.rhsa:def:20151193" version="601" class="patch">
<metadata>
<title>RHSA-2015:1193: xerces-c security update (Moderate)</title>
<affected family="unix">
<platform>Red Hat Enterprise Linux 7</platform>
</affected>
<reference source="RHSA" ref_id="RHSA-2015:1193-00" ref_url="https://rhn.redhat.com/errata/RHSA-2015-1193.html"/>
<reference source="CVE" ref_id="CVE-2015-0252" ref_url="https://access.redhat.com/security/cve/CVE-2015-0252"/>
<description>Xerces-C is a validating XML parser written in a portable subset of C++.
A flaw was found in the way the Xerces-C XML parser processed certain XML
documents. A remote attacker could provide specially crafted XML input
that, when parsed by an application using Xerces-C, would cause that
application to crash.</description>
<!-- ~~~~~~~~~~~~~~~~~~~~ advisory details ~~~~~~~~~~~~~~~~~~~ -->
<advisory from="secalert@redhat.com">
<severity>Moderate</severity>
<rights>Copyright 2015 Red Hat, Inc.</rights>
<issued date="2015-06-29"/>
<updated date="2015-06-29"/>
<cve href="https://access.redhat.com/security/cve/CVE-2015-0252">CVE-2015-0252</cve>
<bugzilla href="https://bugzilla.redhat.com/1199103" id="1199103">CVE-2015-0252 xerces-c: crashes on malformed input</bugzilla>
<affected_cpe_list>
<cpe>cpe:/o:redhat:enterprise_linux:7</cpe>
</affected_cpe_list>
</advisory>
</metadata>
<criteria operator="AND">
<criteria operator="OR">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151193001" comment="Red Hat Enterprise Linux 7 Client is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193002" comment="Red Hat Enterprise Linux 7 Server is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193003" comment="Red Hat Enterprise Linux 7 Workstation is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193004" comment="Red Hat Enterprise Linux 7 ComputeNode is installed" />
</criteria>
<criteria operator="OR">
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151193005" comment="xerces-c is earlier than 0:3.1.1-7.el7_1" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193006" comment="xerces-c is signed with Red Hat redhatrelease2 key" />
</criteria>
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151193007" comment="xerces-c-devel is earlier than 0:3.1.1-7.el7_1" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193008" comment="xerces-c-devel is signed with Red Hat redhatrelease2 key" />
</criteria>
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151193009" comment="xerces-c-doc is earlier than 0:3.1.1-7.el7_1" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193010" comment="xerces-c-doc is signed with Red Hat redhatrelease2 key" />
</criteria>
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151193009" comment="xerces-c-x is earlier than invalid version" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151193010" comment="xerces-c-doc is signed with Red Hat redhatrelease2 key" />
</criteria>
</criteria>
</criteria>
</definition>
</definitions>
<tests>
<!-- ~~~~~~~~~~~~~~~~~~~~~ rpminfo tests ~~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193001" version="601" comment="Red Hat Enterprise Linux 7 Client is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193001" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193002" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193002" version="601" comment="Red Hat Enterprise Linux 7 Server is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193002" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193003" version="601" comment="Red Hat Enterprise Linux 7 Workstation is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193003" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193002" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193004" version="601" comment="Red Hat Enterprise Linux 7 ComputeNode is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193004" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193002" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193005" version="601" comment="xerces-c is earlier than 0:3.1.1-7.el7_1" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193005" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193003" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193006" version="601" comment="xerces-c is signed with Red Hat redhatrelease2 key" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193005" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193001" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193007" version="601" comment="xerces-c-devel is earlier than 0:3.1.1-7.el7_1" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193006" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193003" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193008" version="601" comment="xerces-c-devel is signed with Red Hat redhatrelease2 key" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193006" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193001" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193009" version="601" comment="xerces-c-doc is earlier than 0:3.1.1-7.el7_1" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193007" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193003" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151193010" version="601" comment="xerces-c-doc is signed with Red Hat redhatrelease2 key" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151193007" />
<state state_ref="oval:com.redhat.rhsa:ste:20151193001" />
</rpminfo_test>
</tests>
<objects>
<!-- ~~~~~~~~~~~~~~~~~~~~ rpminfo objects ~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193001" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-client</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193004" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-computenode</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193002" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-server</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193003" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-workstation</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193005" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>xerces-c</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193006" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>xerces-c-devel</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151193007" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>xerces-c-doc</name>
</rpminfo_object>
</objects>
<states>
<!-- ~~~~~~~~~~~~~~~~~~~~ rpminfo states ~~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151193001" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<signature_keyid operation="equals">199e2f91fd431d51</signature_keyid>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151193002" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<version operation="pattern match">^7[^\d]</version>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151193003" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<evr datatype="evr_string" operation="less than">0:3.1.1-7.el7_1</evr>
</rpminfo_state>
</states>
</oval_definitions>

View File

@ -0,0 +1,224 @@
<?xml version="1.0" encoding="UTF-8"?>
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5" xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5" xmlns:oval-def="http://oval.mitre.org/XMLSchema/oval-definitions-5" xmlns:unix-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix" xmlns:red-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://oval.mitre.org/XMLSchema/oval-common-5 oval-common-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5 oval-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#unix unix-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#linux linux-definitions-schema.xsd">
<generator>
<oval:product_name>Red Hat Errata System</oval:product_name>
<oval:schema_version>5.10.1</oval:schema_version>
<oval:timestamp>2015-07-03T01:12:29</oval:timestamp>
</generator>
<definitions>
<definition id="oval:com.redhat.rhsa:def:20151207" version="601" class="patch">
<metadata>
<title>RHSA-2015:1207: firefox security update (Critical)</title>
<affected family="unix">
<platform>Red Hat Enterprise Linux 7</platform>
<platform>Red Hat Enterprise Linux 6</platform>
<platform>Red Hat Enterprise Linux 5</platform>
</affected>
<reference source="RHSA" ref_id="RHSA-2015:1207-00" ref_url="https://rhn.redhat.com/errata/RHSA-2015-1207.html"/>
<reference source="CVE" ref_id="CVE-2015-2722" ref_url="https://access.redhat.com/security/cve/CVE-2015-2722"/>
<reference source="CVE" ref_id="CVE-2015-2724" ref_url="https://access.redhat.com/security/cve/CVE-2015-2724"/>
<reference source="CVE" ref_id="CVE-2015-2725" ref_url="https://access.redhat.com/security/cve/CVE-2015-2725"/>
<reference source="CVE" ref_id="CVE-2015-2727" ref_url="https://access.redhat.com/security/cve/CVE-2015-2727"/>
<reference source="CVE" ref_id="CVE-2015-2728" ref_url="https://access.redhat.com/security/cve/CVE-2015-2728"/>
<reference source="CVE" ref_id="CVE-2015-2729" ref_url="https://access.redhat.com/security/cve/CVE-2015-2729"/>
<reference source="CVE" ref_id="CVE-2015-2731" ref_url="https://access.redhat.com/security/cve/CVE-2015-2731"/>
<reference source="CVE" ref_id="CVE-2015-2733" ref_url="https://access.redhat.com/security/cve/CVE-2015-2733"/>
<reference source="CVE" ref_id="CVE-2015-2734" ref_url="https://access.redhat.com/security/cve/CVE-2015-2734"/>
<reference source="CVE" ref_id="CVE-2015-2735" ref_url="https://access.redhat.com/security/cve/CVE-2015-2735"/>
<reference source="CVE" ref_id="CVE-2015-2736" ref_url="https://access.redhat.com/security/cve/CVE-2015-2736"/>
<reference source="CVE" ref_id="CVE-2015-2737" ref_url="https://access.redhat.com/security/cve/CVE-2015-2737"/>
<reference source="CVE" ref_id="CVE-2015-2738" ref_url="https://access.redhat.com/security/cve/CVE-2015-2738"/>
<reference source="CVE" ref_id="CVE-2015-2739" ref_url="https://access.redhat.com/security/cve/CVE-2015-2739"/>
<reference source="CVE" ref_id="CVE-2015-2740" ref_url="https://access.redhat.com/security/cve/CVE-2015-2740"/>
<reference source="CVE" ref_id="CVE-2015-2741" ref_url="https://access.redhat.com/security/cve/CVE-2015-2741"/>
<reference source="CVE" ref_id="CVE-2015-2743" ref_url="https://access.redhat.com/security/cve/CVE-2015-2743"/>
<description>Mozilla Firefox is an open source web browser. XULRunner provides the XUL
Runtime environment for Mozilla Firefox.
Several flaws were found in the processing of malformed web content. A web
page containing malicious content could cause Firefox to crash or,
potentially, execute arbitrary code with the privileges of the user running
Firefox.</description>
<!-- ~~~~~~~~~~~~~~~~~~~~ advisory details ~~~~~~~~~~~~~~~~~~~ -->
<advisory from="secalert@redhat.com">
<severity>Critical</severity>
<rights>Copyright 2015 Red Hat, Inc.</rights>
<issued date="2015-07-02"/>
<updated date="2015-07-02"/>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2722">CVE-2015-2722</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2724">CVE-2015-2724</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2725">CVE-2015-2725</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2727">CVE-2015-2727</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2728">CVE-2015-2728</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2729">CVE-2015-2729</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2731">CVE-2015-2731</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2733">CVE-2015-2733</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2734">CVE-2015-2734</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2735">CVE-2015-2735</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2736">CVE-2015-2736</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2737">CVE-2015-2737</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2738">CVE-2015-2738</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2739">CVE-2015-2739</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2740">CVE-2015-2740</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2741">CVE-2015-2741</cve>
<cve href="https://access.redhat.com/security/cve/CVE-2015-2743">CVE-2015-2743</cve>
<bugzilla href="https://bugzilla.redhat.com/1236947" id="1236947">CVE-2015-2724 CVE-2015-2725 Mozilla: Miscellaneous memory safety hazards (rv:31.8 / rv:38.1) (MFSA 2015-59)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236950" id="1236950">CVE-2015-2727 Mozilla: Local files or privileged URLs in pages can be opened into new tabs (MFSA 2015-60)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236951" id="1236951">CVE-2015-2728 Mozilla: Type confusion in Indexed Database Manager (MFSA 2015-61)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236952" id="1236952">CVE-2015-2729 Mozilla: Out-of-bound read while computing an oscillator rendering range in Web Audio (MFSA 2015-62)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236953" id="1236953">CVE-2015-2731 Mozilla: Use-after-free in Content Policy due to microtask execution error (MFSA 2015-63)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236955" id="1236955">CVE-2015-2722 CVE-2015-2733 Mozilla: Use-after-free in workers while using XMLHttpRequest (MFSA 2015-65)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236956" id="1236956">CVE-2015-2734 CVE-2015-2735 CVE-2015-2736 CVE-2015-2737 CVE-2015-2738 CVE-2015-2739 CVE-2015-2740 Mozilla: Vulnerabilities found through code inspection (MFSA 2015-66)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236963" id="1236963">CVE-2015-2741 Mozilla: Key pinning is ignored when overridable errors are encountered (MFSA 2015-67)</bugzilla>
<bugzilla href="https://bugzilla.redhat.com/1236964" id="1236964">CVE-2015-2743 Mozilla: Privilege escalation in PDF.js (MFSA 2015-69)</bugzilla>
<affected_cpe_list>
<cpe>cpe:/o:redhat:enterprise_linux:5</cpe>
<cpe>cpe:/o:redhat:enterprise_linux:6</cpe>
<cpe>cpe:/o:redhat:enterprise_linux:7</cpe>
</affected_cpe_list>
</advisory>
</metadata>
<criteria operator="OR">
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151207001" comment="Red Hat Enterprise Linux 5 is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207002" comment="firefox is earlier than 0:38.1.0-1.el5_11" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207003" comment="firefox is signed with Red Hat redhatrelease key" />
</criteria>
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151207008" comment="firefox is earlier than 0:38.1.0-1.el6_6" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207009" comment="firefox is signed with Red Hat redhatrelease2 key" />
<criteria operator="OR">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151207004" comment="Red Hat Enterprise Linux 6 Client is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207005" comment="Red Hat Enterprise Linux 6 Server is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207006" comment="Red Hat Enterprise Linux 6 Workstation is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207007" comment="Red Hat Enterprise Linux 6 ComputeNode is installed" />
</criteria>
</criteria>
<criteria operator="AND">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151207014" comment="firefox is earlier than 0:38.1.0-1.el7_1" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207009" comment="firefox is signed with Red Hat redhatrelease2 key" />
<criteria operator="OR">
<criterion test_ref="oval:com.redhat.rhsa:tst:20151207010" comment="Red Hat Enterprise Linux 7 Client is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207011" comment="Red Hat Enterprise Linux 7 Server is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207012" comment="Red Hat Enterprise Linux 7 Workstation is installed" /><criterion test_ref="oval:com.redhat.rhsa:tst:20151207013" comment="Red Hat Enterprise Linux 7 ComputeNode is installed" />
</criteria>
</criteria>
</criteria>
</definition>
</definitions>
<tests>
<!-- ~~~~~~~~~~~~~~~~~~~~~ rpminfo tests ~~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207001" version="601" comment="Red Hat Enterprise Linux 5 is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207001" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207003" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207002" version="601" comment="firefox is earlier than 0:38.1.0-1.el5_11" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207004" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207003" version="601" comment="firefox is signed with Red Hat redhatrelease key" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207002" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207004" version="601" comment="Red Hat Enterprise Linux 6 Client is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207003" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207005" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207005" version="601" comment="Red Hat Enterprise Linux 6 Server is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207004" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207005" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207006" version="601" comment="Red Hat Enterprise Linux 6 Workstation is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207005" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207005" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207007" version="601" comment="Red Hat Enterprise Linux 6 ComputeNode is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207006" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207005" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207008" version="601" comment="firefox is earlier than 0:38.1.0-1.el6_6" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207006" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207009" version="601" comment="firefox is signed with Red Hat redhatrelease2 key" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207001" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207010" version="601" comment="Red Hat Enterprise Linux 7 Client is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207003" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207007" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207011" version="601" comment="Red Hat Enterprise Linux 7 Server is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207004" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207007" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207012" version="601" comment="Red Hat Enterprise Linux 7 Workstation is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207005" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207007" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207013" version="601" comment="Red Hat Enterprise Linux 7 ComputeNode is installed" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207006" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207007" />
</rpminfo_test>
<rpminfo_test id="oval:com.redhat.rhsa:tst:20151207014" version="601" comment="firefox is earlier than 0:38.1.0-1.el7_1" check="at least one" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<object object_ref="oval:com.redhat.rhsa:obj:20151207002" />
<state state_ref="oval:com.redhat.rhsa:ste:20151207008" />
</rpminfo_test>
</tests>
<objects>
<!-- ~~~~~~~~~~~~~~~~~~~~ rpminfo objects ~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207002" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>firefox</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207001" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207003" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-client</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207006" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-computenode</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207004" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-server</name>
</rpminfo_object>
<rpminfo_object id="oval:com.redhat.rhsa:obj:20151207005" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<name>redhat-release-workstation</name>
</rpminfo_object>
</objects>
<states>
<!-- ~~~~~~~~~~~~~~~~~~~~ rpminfo states ~~~~~~~~~~~~~~~~~~~~~ -->
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207001" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<signature_keyid operation="equals">199e2f91fd431d51</signature_keyid>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207002" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<signature_keyid operation="equals">5326810137017186</signature_keyid>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207003" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<version operation="pattern match">^5[^\d]</version>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207004" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<evr datatype="evr_string" operation="less than">0:38.1.0-1.el5_11</evr>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207005" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<version operation="pattern match">^6[^\d]</version>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207006" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<evr datatype="evr_string" operation="less than">0:38.1.0-1.el6_6</evr>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207007" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<version operation="pattern match">^7[^\d]</version>
</rpminfo_state>
<rpminfo_state id="oval:com.redhat.rhsa:ste:20151207008" version="601" xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux">
<evr datatype="evr_string" operation="less than">0:38.1.0-1.el7_1</evr>
</rpminfo_state>
</states>
</oval_definitions>

View File

@ -0,0 +1,35 @@
Candidate: CVE-2015-4471
PublicDate: 2015-06-11
References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-4471
http://www.openwall.com/lists/oss-security/2015/02/03/11
https://github.com/kyz/libmspack/commit/18b6a2cc0b87536015bedd4f7763e6b02d5aa4f3
https://bugs.debian.org/775499
http://openwall.com/lists/oss-security/2015/02/03/11
Description:
Off-by-one error in the lzxd_decompress function in lzxd.c in libmspack
before 0.5 allows remote attackers to cause a denial of service (buffer
under-read and application crash) via a crafted CAB archive.
Ubuntu-Description:
Notes:
Bugs:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775499
Priority: medium (wrong-syntax)
Discovered-by:
Assigned-to:
Patches_libmspack:
upstream_libmspack: not-affected (0.5-1)
precise_libmspack: DNE
trusty_libmspack: needed
utopic_libmspack: ignored (reached end-of-life)
vivid_libmspack : released ( 0.4-3 )
devel_libmspack: not-affected
unknown_libmspack: needed
Patches_libmspack-anotherpkg: wrong-syntax
wily_libmspack-anotherpkg: released ((0.1)
utopic_libmspack-anotherpkg: not-affected
trusty_libmspack-anotherpkg: needs-triage
precise_libmspack-anotherpkg: released
saucy_libmspack-anotherpkg: needed

414
updater/fetchers/ubuntu.go Normal file
View File

@ -0,0 +1,414 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"regexp"
"strconv"
"strings"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/updater"
"github.com/coreos/quay-sec/utils"
cerrors "github.com/coreos/quay-sec/utils/errors"
"github.com/coreos/quay-sec/utils/types"
)
const (
ubuntuTrackerURI = "https://launchpad.net/ubuntu-cve-tracker"
ubuntuTracker = "lp:ubuntu-cve-tracker"
ubuntuUpdaterFlag = "ubuntuUpdater"
)
var (
repositoryLocalPath string
ubuntuIgnoredReleases = map[string]struct{}{
"upstream": struct{}{},
"devel": struct{}{},
"dapper": struct{}{},
"edgy": struct{}{},
"feisty": struct{}{},
"gutsy": struct{}{},
"hardy": struct{}{},
"intrepid": struct{}{},
"jaunty": struct{}{},
"karmic": struct{}{},
"lucid": struct{}{},
"maverick": struct{}{},
"natty": struct{}{},
"oneiric": struct{}{},
"saucy": struct{}{},
// Syntax error
"Patches": struct{}{},
// Product
"product": struct{}{},
}
branchedRegexp = regexp.MustCompile(`Branched (\d+) revisions.`)
revisionRegexp = regexp.MustCompile(`Now on revision (\d+).`)
affectsCaptureRegexp = regexp.MustCompile(`(?P<release>.*)_(?P<package>.*): (?P<status>[^\s]*)( \(+(?P<note>[^()]*)\)+)?`)
affectsCaptureRegexpNames = affectsCaptureRegexp.SubexpNames()
)
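// Illustrative sketch: the named groups in affectsCaptureRegexp split a
// tracker line such as "trusty_libmspack: needed (some note)" into release,
// package, status and note. This hypothetical helper only shows how the
// capture names line up with the submatches.
func sketchAffectsCapture(line string) map[string]string {
captured := make(map[string]string)
match := affectsCaptureRegexp.FindStringSubmatch(line)
if match == nil {
return captured
}
for i, name := range affectsCaptureRegexpNames {
// Index 0 is the full match; unnamed groups have an empty name.
if i != 0 && name != "" {
captured[name] = match[i]
}
}
return captured
}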
// UbuntuFetcher implements updater.Fetcher and gets vulnerability updates from
// the Ubuntu CVE Tracker.
type UbuntuFetcher struct{}
func init() {
updater.RegisterFetcher("Ubuntu", &UbuntuFetcher{})
}
// FetchUpdate gets vulnerability updates from the Ubuntu CVE Tracker.
func (fetcher *UbuntuFetcher) FetchUpdate() (resp updater.FetcherResponse, err error) {
log.Info("fetching Ubuntu vulnerabilities")
// Check to see if the repository does not already exist.
var revisionNumber int
if _, statErr := os.Stat(repositoryLocalPath); repositoryLocalPath == "" || os.IsNotExist(statErr) {
// Create a temporary folder and download the repository.
p, err := ioutil.TempDir(os.TempDir(), "ubuntu-cve-tracker")
if err != nil {
return resp, ErrFilesystem
}
// bzr wants an empty target directory.
repositoryLocalPath = p + "/repository"
// Create the new repository.
revisionNumber, err = createRepository(repositoryLocalPath)
if err != nil {
return resp, err
}
} else {
// Update the repository that's already on disk.
revisionNumber, err = updateRepository(repositoryLocalPath)
if err != nil {
return resp, err
}
}
// Get the latest revision number we successfully applied in the database.
dbRevisionNumber, err := database.GetFlagValue(ubuntuUpdaterFlag)
if err != nil {
return resp, err
}
// Get the list of vulnerabilities that we have to update.
modifiedCVE, err := collectModifiedVulnerabilities(revisionNumber, dbRevisionNumber, repositoryLocalPath)
if err != nil {
return resp, err
}
// Parse and add the vulnerabilities.
for cvePath := range modifiedCVE {
file, err := os.Open(repositoryLocalPath + "/" + cvePath)
if err != nil {
// This can happen when a file is modified and then moved in another
// commit.
continue
}
defer file.Close()
v, unknownReleases, err := parseUbuntuCVE(file)
if err != nil {
return resp, err
}
if len(v.FixedIn) > 0 {
resp.Vulnerabilities = append(resp.Vulnerabilities, v)
}
// Log any unknown releases.
for k := range unknownReleases {
note := fmt.Sprintf("Ubuntu %s is not mapped to any version number (e.g. trusty->14.04). Please update me.", k)
resp.Notes = append(resp.Notes, note)
log.Warning(note)
// If we encounter an unknown Ubuntu release, we don't want the revision
// number to be considered as processed.
dbRevisionNumberInt, _ := strconv.Atoi(dbRevisionNumber)
revisionNumber = dbRevisionNumberInt
}
}
// Add flag information
resp.FlagName = ubuntuUpdaterFlag
resp.FlagValue = strconv.Itoa(revisionNumber)
return
}
func collectModifiedVulnerabilities(revision int, dbRevision, repositoryLocalPath string) (map[string]struct{}, error) {
modifiedCVE := make(map[string]struct{})
// Handle a brand new database.
if dbRevision == "" {
for _, folder := range []string{"active", "retired"} {
d, err := os.Open(repositoryLocalPath + "/" + folder)
if err != nil {
log.Errorf("could not open Ubuntu vulnerabilities repository's folder: %s", err)
return nil, ErrFilesystem
}
defer d.Close()
// Get the names of all the files in the directory.
names, err := d.Readdirnames(-1)
if err != nil {
log.Errorf("could not read Ubuntu vulnerabilities repository's folder:: %s.", err)
return nil, ErrFilesystem
}
// Add the vulnerabilities to the list.
for _, name := range names {
if strings.HasPrefix(name, "CVE-") {
modifiedCVE[folder+"/"+name] = struct{}{}
}
}
}
return modifiedCVE, nil
}
// Handle an up to date database.
dbRevisionInt, _ := strconv.Atoi(dbRevision)
if revision == dbRevisionInt {
log.Debug("no Ubuntu update")
return modifiedCVE, nil
}
// Handle a database that needs upgrading.
out, err := utils.Exec(repositoryLocalPath, "bzr", "log", "--verbose", "-r"+strconv.Itoa(dbRevisionInt+1)+"..", "-n0")
if err != nil {
log.Errorf("could not get Ubuntu vulnerabilities repository logs: %s. output: %s", err, string(out))
return nil, cerrors.ErrCouldNotDownload
}
scanner := bufio.NewScanner(bytes.NewReader(out))
for scanner.Scan() {
text := strings.TrimSpace(scanner.Text())
if strings.Contains(text, "CVE-") && (strings.HasPrefix(text, "active/") || strings.HasPrefix(text, "retired/")) {
if strings.Contains(text, " => ") {
text = text[strings.Index(text, " => ")+4:]
}
modifiedCVE[text] = struct{}{}
}
}
return modifiedCVE, nil
}
func createRepository(pathToRepo string) (int, error) {
// Branch repository
out, err := utils.Exec("/tmp/", "bzr", "branch", ubuntuTracker, pathToRepo)
if err != nil {
log.Errorf("could not branch Ubuntu repository: %s. output: %s", err, string(out))
return 0, cerrors.ErrCouldNotDownload
}
// Get revision number
regexpMatches := branchedRegexp.FindStringSubmatch(string(out))
if len(regexpMatches) != 2 {
log.Error("could not parse bzr branch output to get the revision number")
return 0, cerrors.ErrCouldNotDownload
}
revision, err := strconv.Atoi(regexpMatches[1])
if err != nil {
log.Error("could not parse bzr branch output to get the revision number")
return 0, cerrors.ErrCouldNotDownload
}
return revision, err
}
func updateRepository(pathToRepo string) (int, error) {
// Pull repository
out, err := utils.Exec(pathToRepo, "bzr", "pull", "--overwrite")
if err != nil {
log.Errorf("could not pull Ubuntu repository: %s. output: %s", err, string(out))
return 0, cerrors.ErrCouldNotDownload
}
// Get revision number
if strings.Contains(string(out), "No revisions or tags to pull") {
out, _ = utils.Exec(pathToRepo, "bzr", "revno")
revno, err := strconv.Atoi(string(out[:len(out)-1]))
if err != nil {
log.Errorf("could not parse Ubuntu repository revision number: %s. output: %s", err, string(out))
return 0, cerrors.ErrCouldNotDownload
}
return revno, nil
}
regexpMatches := revisionRegexp.FindStringSubmatch(string(out))
if len(regexpMatches) != 2 {
log.Error("could not parse bzr pull output to get the revision number")
return 0, cerrors.ErrCouldNotDownload
}
revno, err := strconv.Atoi(regexpMatches[1])
if err != nil {
log.Error("could not parse bzr pull output to get the revision number")
return 0, cerrors.ErrCouldNotDownload
}
return revno, nil
}
func parseUbuntuCVE(fileContent io.Reader) (vulnerability updater.FetcherVulnerability, unknownReleases map[string]struct{}, err error) {
unknownReleases = make(map[string]struct{})
readingDescription := false
scanner := bufio.NewScanner(fileContent)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
// Skip any comments.
if strings.HasPrefix(line, "#") {
continue
}
// Parse the name.
if strings.HasPrefix(line, "Candidate:") {
vulnerability.ID = strings.TrimSpace(strings.TrimPrefix(line, "Candidate:"))
continue
}
// Parse the link.
if vulnerability.Link == "" && strings.HasPrefix(line, "http") {
vulnerability.Link = strings.TrimSpace(line)
continue
}
// Parse the priority.
if strings.HasPrefix(line, "Priority:") {
priority := strings.TrimSpace(strings.TrimPrefix(line, "Priority:"))
// Handle syntax error: Priority: medium (heap-protector)
if strings.Contains(priority, " ") {
priority = priority[:strings.Index(priority, " ")]
}
vulnerability.Priority = ubuntuPriorityToPriority(priority)
continue
}
// Parse the description.
if strings.HasPrefix(line, "Description:") {
readingDescription = true
vulnerability.Description = strings.TrimSpace(strings.TrimPrefix(line, "Description:")) // In case there is a formatting error and the description starts on the same line
continue
}
if readingDescription {
if strings.HasPrefix(line, "Ubuntu-Description:") || strings.HasPrefix(line, "Notes:") || strings.HasPrefix(line, "Bugs:") || strings.HasPrefix(line, "Priority:") || strings.HasPrefix(line, "Discovered-by:") || strings.HasPrefix(line, "Assigned-to:") {
readingDescription = false
} else {
vulnerability.Description = vulnerability.Description + " " + line
continue
}
}
// Try to parse the package that the vulnerability affects.
affectsCaptureArr := affectsCaptureRegexp.FindAllStringSubmatch(line, -1)
if len(affectsCaptureArr) > 0 {
affectsCapture := affectsCaptureArr[0]
md := map[string]string{}
for i, n := range affectsCapture {
md[affectsCaptureRegexpNames[i]] = strings.TrimSpace(n)
}
// Ignore Linux kernels.
if strings.HasPrefix(md["package"], "linux") {
continue
}
// Only consider the package if its status is needed, active, deferred
// or released. Ignore DNE, needs-triage, not-affected, ignored, pending.
if md["status"] == "needed" || md["status"] == "active" || md["status"] == "deferred" || md["status"] == "released" {
if _, isReleaseIgnored := ubuntuIgnoredReleases[md["release"]]; isReleaseIgnored {
continue
}
if _, isReleaseKnown := database.UbuntuReleasesMapping[md["release"]]; !isReleaseKnown {
unknownReleases[md["release"]] = struct{}{}
continue
}
var version types.Version
if md["status"] == "released" {
if md["note"] != "" {
var err error
version, err = types.NewVersion(md["note"])
if err != nil {
log.Warningf("could not parse package version '%s': %s. skipping", md["note"], err)
}
}
} else {
version = types.MaxVersion
}
if version.String() == "" {
continue
}
// Create and add the new package.
vulnerability.FixedIn = append(vulnerability.FixedIn, &database.Package{OS: "ubuntu:" + database.UbuntuReleasesMapping[md["release"]], Name: md["package"], Version: version})
}
}
}
// Trim extra spaces in the description
vulnerability.Description = strings.TrimSpace(vulnerability.Description)
// If no link has been provided (CVE-2006-NNN0 for instance), add the link to the tracker
if vulnerability.Link == "" {
vulnerability.Link = ubuntuTrackerURI
}
// If no priority has been provided (CVE-2007-0667 for instance), set the priority to Unknown
if vulnerability.Priority == "" {
vulnerability.Priority = types.Unknown
}
return
}
func ubuntuPriorityToPriority(priority string) types.Priority {
switch priority {
case "untriaged":
return types.Unknown
case "negligible":
return types.Negligible
case "low":
return types.Low
case "medium":
return types.Medium
case "high":
return types.High
case "critical":
return types.Critical
}
log.Warning("Could not determine a vulnerability priority from: %s", priority)
return types.Unknown
}

View File

@ -0,0 +1,63 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fetchers
import (
"os"
"path"
"runtime"
"testing"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/utils/types"
"github.com/stretchr/testify/assert"
)
func TestUbuntuParser(t *testing.T) {
_, filename, _, _ := runtime.Caller(0)
path := path.Join(path.Dir(filename))
// Test parsing testdata/fetcher_ubuntu_test.txt
testData, _ := os.Open(path + "/testdata/fetcher_ubuntu_test.txt")
defer testData.Close()
vulnerability, unknownReleases, err := parseUbuntuCVE(testData)
if assert.Nil(t, err) {
assert.Equal(t, "CVE-2015-4471", vulnerability.ID)
assert.Equal(t, types.Medium, vulnerability.Priority)
assert.Equal(t, "Off-by-one error in the lzxd_decompress function in lzxd.c in libmspack before 0.5 allows remote attackers to cause a denial of service (buffer under-read and application crash) via a crafted CAB archive.", vulnerability.Description)
// Unknown release (line 28)
_, hasUnknownRelease := unknownReleases["unknown"]
assert.True(t, hasUnknownRelease)
if assert.Len(t, vulnerability.FixedIn, 3) {
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "ubuntu:14.04",
Name: "libmspack",
Version: types.MaxVersion,
})
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "ubuntu:15.04",
Name: "libmspack",
Version: types.NewVersionUnsafe("0.4-3"),
})
assert.Contains(t, vulnerability.FixedIn, &database.Package{
OS: "ubuntu:15.10",
Name: "libmspack-anotherpkg",
Version: types.NewVersionUnsafe("0.1"),
})
}
}
}

286
updater/updater.go Normal file
View File

@ -0,0 +1,286 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package updater updates the vulnerability database periodically using
// the registered vulnerability fetchers.
package updater
import (
"math/rand"
"strconv"
"time"
"github.com/coreos/pkg/capnslog"
"github.com/coreos/quay-sec/database"
"github.com/coreos/quay-sec/health"
"github.com/coreos/quay-sec/utils"
"github.com/pborman/uuid"
)
const (
flagName = "updater"
refreshLockDuration = time.Minute * 8
lockDuration = refreshLockDuration + time.Minute*2
// healthMaxConsecutiveLocalFailures defines the number of times the updater
// can fail before we should tag it as unhealthy
healthMaxConsecutiveLocalFailures = 5
)
var (
log = capnslog.NewPackageLogger("github.com/coreos/quay-sec", "updater")
healthLatestSuccessfulUpdate time.Time
healthLockOwner string
healthIdentifier string
healthConsecutiveLocalFailures int
healthNotes []string
)
func init() {
health.RegisterHealthchecker("updater", Healthcheck)
}
// Run updates the vulnerability database at regular intervals
func Run(interval time.Duration, st *utils.Stopper) {
defer st.End()
// Do not run the updater if the interval is 0
if interval == 0 {
log.Infof("updater service is disabled.")
return
}
whoAmI := uuid.New()
healthIdentifier = whoAmI
log.Infof("updater service started. lock identifier: %s", whoAmI)
for {
// Set the next update time to (last update time + interval) or now if there
// is no last update time stored in database (first update) or if an error
// occurs
nextUpdate := time.Now().UTC()
if lastUpdateTSS, err := database.GetFlagValue(flagName); err == nil && lastUpdateTSS != "" {
if lastUpdateTS, err := strconv.ParseInt(lastUpdateTSS, 10, 64); err == nil {
healthLatestSuccessfulUpdate = time.Unix(lastUpdateTS, 0)
nextUpdate = time.Unix(lastUpdateTS, 0).Add(interval)
}
}
// If the next update timer is in the past, then try to update.
if nextUpdate.Before(time.Now().UTC()) {
// Attempt to get a lock on the update.
log.Debug("attempting to obtain update lock")
hasLock, hasLockUntil := database.Lock(flagName, lockDuration, whoAmI)
if hasLock {
healthLockOwner = healthIdentifier
// Launch update in a new go routine.
doneC := make(chan bool, 1)
go func() {
Update()
doneC <- true
}()
// Refresh the lock until the update is done.
for done := false; !done; {
select {
case <-doneC:
done = true
case <-time.After(refreshLockDuration):
database.Lock(flagName, lockDuration, whoAmI)
}
}
// Write the last update time to the database and set the next update
// time.
now := time.Now().UTC()
database.UpdateFlag(flagName, strconv.FormatInt(now.Unix(), 10))
healthLatestSuccessfulUpdate = now
nextUpdate = now.Add(interval)
// Unlock the update.
database.Unlock(flagName, whoAmI)
} else {
lockOwner, lockExpiration, err := database.LockInfo(flagName)
if err != nil {
log.Debug("update lock is already taken")
nextUpdate = hasLockUntil
} else {
log.Debugf("update lock is already taken by %s until %v", lockOwner, lockExpiration)
nextUpdate = lockExpiration
healthLockOwner = lockOwner
}
}
}
// Sleep, but remain stoppable until approximately the next update time.
now := time.Now().UTC()
waitUntil := nextUpdate.Add(time.Duration(rand.ExpFloat64()/0.5) * time.Second)
log.Debugf("next update attempt scheduled for %v.", waitUntil)
if !waitUntil.Before(now) {
if !st.Sleep(waitUntil.Sub(time.Now())) {
break
}
}
}
log.Info("updater service stopped")
}
// Update fetches all the vulnerabilities from the registered fetchers, upserts
// them into the database and then sends notifications.
func Update() {
log.Info("updating vulnerabilities")
// Fetch updates in parallel.
var status = true
var responseC = make(chan *FetcherResponse, 0)
for n, f := range fetchers {
go func(name string, fetcher Fetcher) {
response, err := fetcher.FetchUpdate()
if err != nil {
log.Errorf("an error occured when fetching update '%s': %s.", name, err)
status = false
responseC <- nil
return
}
responseC <- &response
}(n, f)
}
// Collect results of updates.
var responses []*FetcherResponse
var notes []string
for i := 0; i < len(fetchers); {
select {
case resp := <-responseC:
if resp != nil {
responses = append(responses, resp)
notes = append(notes, resp.Notes...)
}
i++
}
}
close(responseC)
// TODO(Quentin-M): Merge responses together
// TODO(Quentin-M): Complete information using NVD
// Store flags out of the response struct.
flags := make(map[string]string)
for _, response := range responses {
if response.FlagName != "" && response.FlagValue != "" {
flags[response.FlagName] = response.FlagValue
}
}
// Update health notes.
healthNotes = notes
// Build list of packages.
var packages []*database.Package
for _, response := range responses {
for _, v := range response.Vulnerabilities {
packages = append(packages, v.FixedIn...)
}
}
// Insert packages into the database.
log.Tracef("beginning insertion of %d packages for update", len(packages))
t := time.Now()
err := database.InsertPackages(packages)
log.Tracef("inserting %d packages took %v", len(packages), time.Since(t))
if err != nil {
log.Errorf("an error occured when inserting packages for update: %s", err)
updateHealth(false)
return
}
packages = nil
// Build a list of vulnerabilities.
var vulnerabilities []*database.Vulnerability
for _, response := range responses {
for _, v := range response.Vulnerabilities {
var packageNodes []string
for _, pkg := range v.FixedIn {
packageNodes = append(packageNodes, pkg.Node)
}
vulnerabilities = append(vulnerabilities, &database.Vulnerability{ID: v.ID, Link: v.Link, Priority: v.Priority, Description: v.Description, FixedInNodes: packageNodes})
}
}
responses = nil
// Insert vulnerabilities into the database.
log.Tracef("beginning insertion of %d vulnerabilities for update", len(vulnerabilities))
t = time.Now()
notifications, err := database.InsertVulnerabilities(vulnerabilities)
log.Tracef("inserting %d vulnerabilities took %v", len(vulnerabilities), time.Since(t))
if err != nil {
log.Errorf("an error occured when inserting vulnerabilities for update: %s", err)
updateHealth(false)
return
}
vulnerabilities = nil
// Insert notifications into the database.
err = database.InsertNotifications(notifications, database.GetDefaultNotificationWrapper())
if err != nil {
log.Errorf("an error occured when inserting notifications for update: %s", err)
updateHealth(false)
return
}
notifications = nil
// Update flags in the database.
for flagName, flagValue := range flags {
database.UpdateFlag(flagName, flagValue)
}
// Update health depending on the status of the fetchers.
updateHealth(status)
log.Info("update finished")
}
func updateHealth(s bool) {
if !s {
healthConsecutiveLocalFailures++
} else {
healthConsecutiveLocalFailures = 0
}
}
// Healthcheck returns the health of the updater service.
func Healthcheck() health.Status {
return health.Status{
IsEssential: false,
IsHealthy: healthConsecutiveLocalFailures < healthMaxConsecutiveLocalFailures,
Details: struct {
HealthIdentifier string
HealthLockOwner string
LatestSuccessfulUpdate time.Time
ConsecutiveLocalFailures int
Notes []string `json:",omitempty"`
}{
HealthIdentifier: healthIdentifier,
HealthLockOwner: healthLockOwner,
LatestSuccessfulUpdate: healthLatestSuccessfulUpdate,
ConsecutiveLocalFailures: healthConsecutiveLocalFailures,
Notes: healthNotes,
},
}
}
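A minimal usage sketch (an assumption, not code from this commit): the updater is meant to be started in a background goroutine and stopped through a utils.Stopper, with Begin matching the End deferred inside Run.

```go
package main

import (
	"time"

	"github.com/coreos/quay-sec/updater"
	"github.com/coreos/quay-sec/utils"
)

func main() {
	st := utils.NewStopper()

	// Run defers st.End(), so Begin must be called before launching it.
	st.Begin()
	go updater.Run(2*time.Hour, st)

	// ... later, on shutdown: close the stop channel and wait for Run to return.
	st.Stop()
}
```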

41
utils/errors/errors.go Normal file
View File

@ -0,0 +1,41 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package errors defines error types that are used in several modules
package errors
import "errors"
var (
// ErrFilesystem occurs when a filesystem interaction fails.
ErrFilesystem = errors.New("something went wrong when interacting with the fs")
// ErrCouldNotDownload occurs when a download fails.
ErrCouldNotDownload = errors.New("could not download requested resource")
// ErrNotFound occurs when a resource could not be found.
ErrNotFound = errors.New("the resource cannot be found")
)
// ErrBadRequest occurs when a method has been passed an inappropriate argument.
type ErrBadRequest struct {
s string
}
// NewBadRequestError instantiates an ErrBadRequest with the specified message.
func NewBadRequestError(message string) error {
return &ErrBadRequest{s: message}
}
func (e *ErrBadRequest) Error() string {
return e.s
}

39
utils/exec.go Normal file
View File

@ -0,0 +1,39 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package utils simply defines utility functions and types.
package utils
import (
"bytes"
"os/exec"
)
// Exec runs the given binary with the given arguments in the specified directory
// and returns its combined standard output and standard error.
func Exec(dir string, bin string, args ...string) ([]byte, error) {
_, err := exec.LookPath(bin)
if err != nil {
return nil, err
}
cmd := exec.Command(bin, args...)
cmd.Dir = dir
var buf bytes.Buffer
cmd.Stdout = &buf
cmd.Stderr = &buf
err = cmd.Run()
return buf.Bytes(), err
}

65
utils/stopper.go Normal file
View File

@ -0,0 +1,65 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package utils
import (
"sync"
"time"
)
// Stopper eases the graceful termination of a group of goroutines
type Stopper struct {
wg sync.WaitGroup
stop chan struct{}
}
// NewStopper initializes a new Stopper instance
func NewStopper() *Stopper {
return &Stopper{stop: make(chan struct{}, 0)}
}
// Begin indicates that a new goroutine has started.
func (s *Stopper) Begin() {
s.wg.Add(1)
}
// End indicates that a goroutine has stopped.
func (s *Stopper) End() {
s.wg.Done()
}
// Chan returns the channel on which goroutines can listen to determine if
// they should stop. The channel is closed when Stop() is called.
func (s *Stopper) Chan() chan struct{} {
return s.stop
}
// Sleep puts the current goroutine to sleep for the duration d.
// Sleep can be interrupted if the goroutine is asked to stop,
// in which case Sleep returns false.
func (s *Stopper) Sleep(d time.Duration) bool {
select {
case <-time.After(d):
return true
case <-s.stop:
return false
}
}
// Stop asks every goroutine to end.
func (s *Stopper) Stop() {
close(s.stop)
s.wg.Wait()
}
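A minimal sketch (assuming this repository's import path) of how a periodic worker pairs Begin/End with Sleep and Stop:

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/quay-sec/utils"
)

func main() {
	st := utils.NewStopper()

	st.Begin()
	go func() {
		defer st.End()
		for {
			log.Println("doing periodic work")
			// Sleep returns false as soon as Stop() is called, which ends the loop.
			if !st.Sleep(10 * time.Second) {
				return
			}
		}
	}()

	time.Sleep(time.Second)
	st.Stop() // closes the stop channel and waits for the goroutine to finish
}
```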

68
utils/string.go Normal file
View File

@ -0,0 +1,68 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package utils
import (
"crypto/sha1"
"encoding/hex"
"regexp"
)
var urlParametersRegexp = regexp.MustCompile(`(\?|\&)([^=]+)\=([^ &]+)`)
// Hash returns the hex-encoded SHA-1 hash of the given string
func Hash(str string) string {
h := sha1.New()
h.Write([]byte(str))
bs := h.Sum(nil)
return hex.EncodeToString(bs)
}
// CleanURL removes all query parameters from a URL
func CleanURL(str string) string {
return urlParametersRegexp.ReplaceAllString(str, "")
}
// Contains reports whether the given string is present in the given slice of
// strings
func Contains(needle string, haystack []string) bool {
for _, h := range haystack {
if h == needle {
return true
}
}
return false
}
// CompareStringLists returns the strings which are present in X but not in Y
func CompareStringLists(X, Y []string) []string {
m := make(map[string]int)
for _, y := range Y {
m[y] = 1
}
diff := []string{}
for _, x := range X {
if m[x] > 0 {
continue
}
diff = append(diff, x)
m[x] = 1
}
return diff
}

107
utils/tar.go Normal file
View File

@ -0,0 +1,107 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package utils
import (
"archive/tar"
"bufio"
"bytes"
"compress/gzip"
"errors"
"io"
"io/ioutil"
"strings"
)
var (
// ErrCouldNotExtract occurs when an extraction fails.
ErrCouldNotExtract = errors.New("utils: could not extract the archive")
// ErrExtractedFileTooBig occurs when a file to extract is too big.
ErrExtractedFileTooBig = errors.New("utils: could not extract one or more files from the archive: file too big")
gzipHeader = []byte{0x1f, 0x8b}
)
// SelectivelyExtractArchive extracts the specified files and folders
// from tar or tar-gzip data read from the given reader and stores them in a map indexed by file path
func SelectivelyExtractArchive(r io.Reader, toExtract []string, maxFileSize int64) (map[string][]byte, error) {
data := make(map[string][]byte)
// Create a tar or tar-gzip reader
tr, err := getTarReader(r)
if err != nil {
return data, ErrCouldNotExtract
}
// For each element in the archive
for {
hdr, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return data, ErrCouldNotExtract
}
// Get element filename
filename := hdr.Name
filename = strings.TrimPrefix(filename, "./")
// Determine if we should extract the element
toBeExtracted := false
for _, s := range toExtract {
if strings.HasPrefix(filename, s) {
toBeExtracted = true
break
}
}
if toBeExtracted {
// File size limit
if maxFileSize > 0 && hdr.Size > maxFileSize {
return data, ErrExtractedFileTooBig
}
// Extract the element
if hdr.Typeflag == tar.TypeSymlink || hdr.Typeflag == tar.TypeLink || hdr.Typeflag == tar.TypeReg {
d, _ := ioutil.ReadAll(tr)
data[filename] = d
}
}
}
return data, nil
}
// getTarReader returns a tar.Reader associated with the specified io.Reader,
// optionally backed by a gzip.Reader if gzip compression is detected.
//
// Gzip detection is done using the magic numbers defined in RFC 1952:
// the first two bytes must be 0x1f and 0x8b.
func getTarReader(r io.Reader) (*tar.Reader, error) {
br := bufio.NewReader(r)
header, err := br.Peek(2)
if err == nil && bytes.Equal(header, gzipHeader) {
gr, err := gzip.NewReader(br)
if err != nil {
return nil, err
}
return tar.NewReader(gr), nil
}
return tar.NewReader(br), nil
}
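A usage sketch for SelectivelyExtractArchive (the file name and prefix below are assumptions for illustration):

```go
package main

import (
	"fmt"
	"os"

	"github.com/coreos/quay-sec/utils"
)

func main() {
	// layer.tar is an assumed input; any tar or gzip-compressed tar stream works.
	f, err := os.Open("layer.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Extract only entries whose path starts with "etc/", rejecting files over 1 MiB.
	data, err := utils.SelectivelyExtractArchive(f, []string{"etc/"}, 1024*1024)
	if err != nil {
		panic(err)
	}
	for name, content := range data {
		fmt.Printf("%s: %d bytes\n", name, len(content))
	}
}
```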

BIN
utils/testdata/utils_test.tar vendored Normal file

Binary file not shown.

BIN
utils/testdata/utils_test.tar.gz vendored Normal file

Binary file not shown.

88
utils/types/priority.go Normal file
View File

@ -0,0 +1,88 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package types defines useful types that are used in database models.
package types
// Priority defines a vulnerability priority
type Priority string
const (
// Unknown is either a security problem that has not been
// assigned to a priority yet or a priority that our system
// did not recognize
Unknown Priority = "Unknown"
// Negligible is technically a security problem, but is
// only theoretical in nature, requires a very special
// situation, has almost no install base, or does no real
// damage. These tend not to get backported from upstream,
// and will likely not be included in security updates unless
// there is an easy fix and some other issue causes an update.
Negligible Priority = "Negligible"
// Low is a security problem, but is hard to
// exploit due to environment, requires a user-assisted
// attack, a small install base, or does very little damage.
// These tend to be included in security updates only when
// higher priority issues require an update, or if many
// low priority issues have built up.
Low Priority = "Low"
// Medium is a real security problem, and is exploitable
// for many people. Includes network daemon denial of service
// attacks, cross-site scripting, and gaining user privileges.
// Updates should be made soon for this priority of issue.
Medium Priority = "Medium"
// High is a real problem, exploitable for many people in a default
// installation. Includes serious remote denial of services,
// local root privilege escalations, or data loss.
High Priority = "High"
// Critical is a world-burning problem, exploitable for nearly all people
// in a default installation of Linux. Includes remote root
// privilege escalations, or massive data loss.
Critical Priority = "Critical"
// Defcon1 is a Critical problem which has been manually highlighted by
// the team. It requires immediate attention.
Defcon1 Priority = "Defcon1"
)
// Priorities lists all known priorities, ordered from lower to higher
var Priorities = []Priority{Unknown, Negligible, Low, Medium, High, Critical, Defcon1}
// IsValid determines if the priority is a valid one
func (p Priority) IsValid() bool {
for _, pp := range Priorities {
if p == pp {
return true
}
}
return false
}
// Compare compares two priorities
func (p Priority) Compare(p2 Priority) int {
var i1, i2 int
for i1 = 0; i1 < len(Priorities); i1 = i1 + 1 {
if p == Priorities[i1] {
break
}
}
for i2 = 0; i2 < len(Priorities); i2 = i2 + 1 {
if p2 == Priorities[i2] {
break
}
}
return i1 - i2
}

View File

@ -0,0 +1,32 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestComparePriority(t *testing.T) {
assert.Equal(t, Medium.Compare(Medium), 0, "Priority comparison failed")
assert.True(t, Medium.Compare(High) < 0, "Priority comparison failed")
assert.True(t, Critical.Compare(Low) > 0, "Priority comparison failed")
}
func TestIsValid(t *testing.T) {
assert.False(t, Priority("Test").IsValid())
assert.True(t, Unknown.IsValid())
}

282
utils/types/version.go Normal file
View File

@ -0,0 +1,282 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"encoding/json"
"errors"
"strconv"
"strings"
"unicode"
)
// Version represents a package version
type Version struct {
epoch int
version string
revision string
}
var (
// MinVersion is a special package version which is always sorted first
MinVersion = Version{version: "#MINV#"}
// MaxVersion is a special package version which is always sorted last
MaxVersion = Version{version: "#MAXV#"}
versionAllowedSymbols = []rune{'.', '-', '+', '~', ':', '_'}
revisionAllowedSymbols = []rune{'.', '+', '~', '_'}
)
// NewVersion function parses a string into a Version struct which can be compared
//
// The implementation is based on http://man.he.net/man5/deb-version and
// on https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
//
// It uses the dpkg-1.17.25's algorithm (lib/parsehelp.c)
func NewVersion(str string) (Version, error) {
var version Version
// Trim leading and trailing space
str = strings.TrimSpace(str)
if len(str) == 0 {
return Version{}, errors.New("Version string is empty")
}
// Max/Min versions
if str == MaxVersion.String() {
return MaxVersion, nil
}
if str == MinVersion.String() {
return MinVersion, nil
}
// Find epoch
sepepoch := strings.Index(str, ":")
if sepepoch > -1 {
intepoch, err := strconv.Atoi(str[:sepepoch])
if err == nil {
version.epoch = intepoch
} else {
return Version{}, errors.New("epoch in version is not a number")
}
if intepoch < 0 {
return Version{}, errors.New("epoch in version is negative")
}
} else {
version.epoch = 0
}
// Find version / revision
seprevision := strings.LastIndex(str, "-")
if seprevision > -1 {
version.version = str[sepepoch+1 : seprevision]
version.revision = str[seprevision+1:]
} else {
version.version = str[sepepoch+1:]
version.revision = ""
}
// Verify format
if len(version.version) == 0 {
return Version{}, errors.New("No version")
}
if !unicode.IsDigit(rune(version.version[0])) {
return Version{}, errors.New("version does not start with digit")
}
for i := 0; i < len(version.version); i = i + 1 {
r := rune(version.version[i])
if !unicode.IsDigit(r) && !unicode.IsLetter(r) && !containsRune(versionAllowedSymbols, r) {
return Version{}, errors.New("invalid character in version")
}
}
for i := 0; i < len(version.revision); i = i + 1 {
r := rune(version.revision[i])
if !unicode.IsDigit(r) && !unicode.IsLetter(r) && !containsRune(revisionAllowedSymbols, r) {
return Version{}, errors.New("invalid character in revision")
}
}
return version, nil
}
// NewVersionUnsafe is just a wrapper around NewVersion that ignores potential
// parsing errors. Useful for test purposes
func NewVersionUnsafe(str string) Version {
v, _ := NewVersion(str)
return v
}
// Compare function compares two Debian-like package versions
//
// The implementation is based on http://man.he.net/man5/deb-version and
// on https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
//
// It uses the dpkg-1.17.25's algorithm (lib/version.c)
func (a Version) Compare(b Version) int {
// Quick check
if a == b {
return 0
}
// Max/Min comparison
if a == MinVersion || b == MaxVersion {
return -1
}
if b == MinVersion || a == MaxVersion {
return 1
}
// Compare epochs
if a.epoch > b.epoch {
return 1
}
if a.epoch < b.epoch {
return -1
}
// Compare version
rc := verrevcmp(a.version, b.version)
if rc != 0 {
return signum(rc)
}
// Compare revision
return signum(verrevcmp(a.revision, b.revision))
}
// String returns the string representation of a Version
func (v Version) String() (s string) {
if v.epoch != 0 {
s = strconv.Itoa(v.epoch) + ":"
}
s += v.version
if v.revision != "" {
s += "-" + v.revision
}
return
}
// MarshalJSON implements the json.Marshaler interface and marshals the Version
// as its string representation.
func (v Version) MarshalJSON() ([]byte, error) {
return json.Marshal(v.String())
}
// UnmarshalJSON implements the json.Unmarshaler interface and parses a JSON
// string back into a Version.
func (v *Version) UnmarshalJSON(b []byte) (err error) {
var str string
if err = json.Unmarshal(b, &str); err != nil {
return
}
vp, err := NewVersion(str)
*v = vp
return
}
func verrevcmp(t1, t2 string) int {
t1, rt1 := nextRune(t1)
t2, rt2 := nextRune(t2)
for rt1 != nil || rt2 != nil {
firstDiff := 0
for (rt1 != nil && !unicode.IsDigit(*rt1)) || (rt2 != nil && !unicode.IsDigit(*rt2)) {
ac := 0
bc := 0
if rt1 != nil {
ac = order(*rt1)
}
if rt2 != nil {
bc = order(*rt2)
}
if ac != bc {
return ac - bc
}
t1, rt1 = nextRune(t1)
t2, rt2 = nextRune(t2)
}
for rt1 != nil && *rt1 == '0' {
t1, rt1 = nextRune(t1)
}
for rt2 != nil && *rt2 == '0' {
t2, rt2 = nextRune(t2)
}
for rt1 != nil && unicode.IsDigit(*rt1) && rt2 != nil && unicode.IsDigit(*rt2) {
if firstDiff == 0 {
firstDiff = int(*rt1) - int(*rt2)
}
t1, rt1 = nextRune(t1)
t2, rt2 = nextRune(t2)
}
if rt1 != nil && unicode.IsDigit(*rt1) {
return 1
}
if rt2 != nil && unicode.IsDigit(*rt2) {
return -1
}
if firstDiff != 0 {
return firstDiff
}
}
return 0
}
// order compares runes using a modified ASCII table
// so that letters are sorted earlier than non-letters
// and so that a tilde sorts before anything else
func order(r rune) int {
if unicode.IsDigit(r) {
return 0
}
if unicode.IsLetter(r) {
return int(r)
}
if r == '~' {
return -1
}
return int(r) + 256
}
func nextRune(str string) (string, *rune) {
if len(str) >= 1 {
r := rune(str[0])
return str[1:], &r
}
return str, nil
}
func containsRune(s []rune, e rune) bool {
for _, a := range s {
if a == e {
return true
}
}
return false
}
func signum(a int) int {
switch {
case a < 0:
return -1
case a > 0:
return +1
}
return 0
}
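A small comparison sketch (assuming this repository's import path); the expected results match the rules above and the cases in version_test.go:

```go
package main

import (
	"fmt"

	"github.com/coreos/quay-sec/utils/types"
)

func main() {
	a, _ := types.NewVersion("3.0~rc1-1")
	b, _ := types.NewVersion("3.0-1")

	// A tilde sorts before anything else, so the release candidate is older.
	fmt.Println(a.Compare(b)) // -1

	// MaxVersion compares greater than any concrete version.
	fmt.Println(types.MaxVersion.Compare(b)) // 1
}
```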

243
utils/types/version_test.go Normal file
View File

@ -0,0 +1,243 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
const (
LESS = -1
EQUAL = 0
GREATER = 1
)
func TestCompareSimpleVersion(t *testing.T) {
cases := []struct {
v1 Version
expected int
v2 Version
}{
{Version{}, EQUAL, Version{}},
{Version{epoch: 1}, LESS, Version{epoch: 2}},
{Version{epoch: 0, version: "1", revision: "1"}, LESS, Version{epoch: 0, version: "2", revision: "1"}},
{Version{epoch: 0, version: "a", revision: "0"}, LESS, Version{epoch: 0, version: "b", revision: "0"}},
{Version{epoch: 0, version: "1", revision: "1"}, LESS, Version{epoch: 0, version: "1", revision: "2"}},
{Version{epoch: 0, version: "0", revision: "0"}, EQUAL, Version{epoch: 0, version: "0", revision: "0"}},
{Version{epoch: 0, version: "0", revision: "00"}, EQUAL, Version{epoch: 0, version: "00", revision: "0"}},
{Version{epoch: 1, version: "2", revision: "3"}, EQUAL, Version{epoch: 1, version: "2", revision: "3"}},
{Version{epoch: 0, version: "0", revision: "a"}, LESS, Version{epoch: 0, version: "0", revision: "b"}},
{MinVersion, LESS, MaxVersion},
{MinVersion, LESS, Version{}},
{MinVersion, LESS, Version{version: "0"}},
{MaxVersion, GREATER, Version{}},
{MaxVersion, GREATER, Version{epoch: 9999999, version: "9999999", revision: "9999999"}},
}
for _, c := range cases {
cmp := c.v1.Compare(c.v2)
assert.Equal(t, c.expected, cmp, "%s vs. %s, = %d, expected %d", c.v1, c.v2, cmp, c.expected)
cmp = c.v2.Compare(c.v1)
assert.Equal(t, -c.expected, cmp, "%s vs. %s, = %d, expected %d", c.v2, c.v1, cmp, -c.expected)
}
}
func TestParse(t *testing.T) {
cases := []struct {
str string
ver Version
err bool
}{
// Test 0
{"0", Version{epoch: 0, version: "0", revision: ""}, false},
{"0:0", Version{epoch: 0, version: "0", revision: ""}, false},
{"0:0-", Version{epoch: 0, version: "0", revision: ""}, false},
{"0:0-0", Version{epoch: 0, version: "0", revision: "0"}, false},
{"0:0.0-0.0", Version{epoch: 0, version: "0.0", revision: "0.0"}, false},
// Test epoched
{"1:0", Version{epoch: 1, version: "0", revision: ""}, false},
{"5:1", Version{epoch: 5, version: "1", revision: ""}, false},
// Test multiple hyphens
{"0:0-0-0", Version{epoch: 0, version: "0-0", revision: "0"}, false},
{"0:0-0-0-0", Version{epoch: 0, version: "0-0-0", revision: "0"}, false},
// Test multiple colons
{"0:0:0-0", Version{epoch: 0, version: "0:0", revision: "0"}, false},
{"0:0:0:0-0", Version{epoch: 0, version: "0:0:0", revision: "0"}, false},
// Test multiple hyphens and colons
{"0:0:0-0-0", Version{epoch: 0, version: "0:0-0", revision: "0"}, false},
{"0:0-0:0-0", Version{epoch: 0, version: "0-0:0", revision: "0"}, false},
// Test valid characters in version
{"0:09azAZ.-+~:_-0", Version{epoch: 0, version: "09azAZ.-+~:_", revision: "0"}, false},
// Test valid characters in debian revision
{"0:0-azAZ09.+~_", Version{epoch: 0, version: "0", revision: "azAZ09.+~_"}, false},
// Test version with leading and trailing spaces
{" 0:0-1", Version{epoch: 0, version: "0", revision: "1"}, false},
{"0:0-1 ", Version{epoch: 0, version: "0", revision: "1"}, false},
{" 0:0-1 ", Version{epoch: 0, version: "0", revision: "1"}, false},
// Test empty version
{"", Version{}, true},
{" ", Version{}, true},
{"0:", Version{}, true},
// Test version with embedded spaces
{"0:0 0-1", Version{}, true},
// Test version with negative epoch
{"-1:0-1", Version{}, true},
// Test invalid characters in epoch
{"a:0-0", Version{}, true},
{"A:0-0", Version{}, true},
// Test version not starting with a digit
{"0:abc3-0", Version{}, true},
}
for _, c := range cases {
v, err := NewVersion(c.str)
if c.err {
assert.Error(t, err, "When parsing '%s'", c.str)
} else {
assert.Nil(t, err, "When parsing '%s'", c.str)
}
assert.Equal(t, c.ver, v, "When parsing '%s'", c.str)
}
// Test invalid characters in version
versym := []rune{'!', '#', '@', '$', '%', '&', '/', '|', '\\', '<', '>', '(', ')', '[', ']', '{', '}', ';', ',', '=', '*', '^', '\''}
for _, r := range versym {
_, err := NewVersion(strings.Join([]string{"0:0", string(r), "-0"}, ""))
assert.Error(t, err, "Parsing with invalid character '%s' in version should have failed", string(r))
}
// Test invalid characters in revision
versym = []rune{'!', '#', '@', '$', '%', '&', '/', '|', '\\', '<', '>', '(', ')', '[', ']', '{', '}', ':', ';', ',', '=', '*', '^', '\''}
for _, r := range versym {
_, err := NewVersion(strings.Join([]string{"0:0-", string(r)}, ""))
assert.Error(t, err, "Parsing with invalid character '%s' in revision should have failed", string(r))
}
}
func TestParseAndCompare(t *testing.T) {
const LESS = -1
const EQUAL = 0
const GREATER = 1
cases := []struct {
v1 string
expected int
v2 string
}{
{"7.6p2-4", GREATER, "7.6-0"},
{"1.0.3-3", GREATER, "1.0-1"},
{"1.3", GREATER, "1.2.2-2"},
{"1.3", GREATER, "1.2.2"},
// Some properties of text strings
{"0-pre", EQUAL, "0-pre"},
{"0-pre", LESS, "0-pree"},
{"1.1.6r2-2", GREATER, "1.1.6r-1"},
{"2.6b2-1", GREATER, "2.6b-2"},
{"98.1p5-1", LESS, "98.1-pre2-b6-2"},
{"0.4a6-2", GREATER, "0.4-1"},
{"1:3.0.5-2", LESS, "1:3.0.5.1"},
// epochs
{"1:0.4", GREATER, "10.3"},
{"1:1.25-4", LESS, "1:1.25-8"},
{"0:1.18.36", EQUAL, "1.18.36"},
{"1.18.36", GREATER, "1.18.35"},
{"0:1.18.36", GREATER, "1.18.35"},
// Funky, but allowed, characters in upstream version
{"9:1.18.36:5.4-20", LESS, "10:0.5.1-22"},
{"9:1.18.36:5.4-20", LESS, "9:1.18.36:5.5-1"},
{"9:1.18.36:5.4-20", LESS, " 9:1.18.37:4.3-22"},
{"1.18.36-0.17.35-18", GREATER, "1.18.36-19"},
// Junk
{"1:1.2.13-3", LESS, "1:1.2.13-3.1"},
{"2.0.7pre1-4", LESS, "2.0.7r-1"},
// if a version includes a dash, it should be the debrev dash - policy says so
{"0:0-0-0", GREATER, "0-0"},
// do we like strange versions? Yes we like strange versions…
{"0", EQUAL, "0"},
{"0", EQUAL, "00"},
// #205960
{"3.0~rc1-1", LESS, "3.0-1"},
// #573592 - debian policy 5.6.12
{"1.0", EQUAL, "1.0-0"},
{"0.2", LESS, "1.0-0"},
{"1.0", LESS, "1.0-0+b1"},
{"1.0", GREATER, "1.0-0~"},
// "steal" the testcases from (old perl) cupt
{"1.2.3", EQUAL, "1.2.3"}, // identical
{"4.4.3-2", EQUAL, "4.4.3-2"}, // identical
{"1:2ab:5", EQUAL, "1:2ab:5"}, // this is correct...
{"7:1-a:b-5", EQUAL, "7:1-a:b-5"}, // and this
{"57:1.2.3abYZ+~-4-5", EQUAL, "57:1.2.3abYZ+~-4-5"}, // and those too
{"1.2.3", EQUAL, "0:1.2.3"}, // zero epoch
{"1.2.3", EQUAL, "1.2.3-0"}, // zero revision
{"009", EQUAL, "9"}, // zeroes…
{"009ab5", EQUAL, "9ab5"}, // there as well
{"1.2.3", LESS, "1.2.3-1"}, // added non-zero revision
{"1.2.3", LESS, "1.2.4"}, // just bigger
{"1.2.4", GREATER, "1.2.3"}, // order doesn't matter
{"1.2.24", GREATER, "1.2.3"}, // bigger, eh?
{"0.10.0", GREATER, "0.8.7"}, // bigger, eh?
{"3.2", GREATER, "2.3"}, // major number rocks
{"1.3.2a", GREATER, "1.3.2"}, // letters rock
{"0.5.0~git", LESS, "0.5.0~git2"}, // numbers rock
{"2a", LESS, "21"}, // but not in all places
{"1.3.2a", LESS, "1.3.2b"}, // but there is another letter
{"1:1.2.3", GREATER, "1.2.4"}, // epoch rocks
{"1:1.2.3", LESS, "1:1.2.4"}, // bigger anyway
{"1.2a+~bCd3", LESS, "1.2a++"}, // tilde doesn't rock
{"1.2a+~bCd3", GREATER, "1.2a+~"}, // but first is longer!
{"5:2", GREATER, "304-2"}, // epoch rocks
{"5:2", LESS, "304:2"}, // so big epoch?
{"25:2", GREATER, "3:2"}, // 25 > 3, obviously
{"1:2:123", LESS, "1:12:3"}, // 12 > 2
{"1.2-5", LESS, "1.2-3-5"}, // 1.2 < 1.2-3
{"5.10.0", GREATER, "5.005"}, // preceding zeroes don't matters
{"3a9.8", LESS, "3.10.2"}, // letters are before all letter symbols
{"3a9.8", GREATER, "3~10"}, // but after the tilde
{"1.4+OOo3.0.0~", LESS, "1.4+OOo3.0.0-4"}, // another tilde check
{"2.4.7-1", LESS, "2.4.7-z"}, // revision comparing
{"1.002-1+b2", GREATER, "1.00"}, // whatever...
}
for _, c := range cases {
v1, err1 := NewVersion(c.v1)
v2, err2 := NewVersion(c.v2)
if assert.Nil(t, err1) && assert.Nil(t, err2) {
cmp := v1.Compare(v2)
assert.Equal(t, c.expected, cmp, "%s vs. %s, = %d, expected %d", c.v1, c.v2, cmp, c.expected)
cmp = v2.Compare(v1)
assert.Equal(t, -c.expected, cmp, "%s vs. %s, = %d, expected %d", c.v2, c.v1, cmp, -c.expected)
}
}
}
func TestVersionJson(t *testing.T) {
v, _ := NewVersion("57:1.2.3abYZ+~-4-5")
// Marshal
json, err := v.MarshalJSON()
assert.Nil(t, err)
assert.Equal(t, "\""+v.String()+"\"", string(json))
// Unmarshal
var v2 Version
v2.UnmarshalJSON(json)
assert.Equal(t, v, v2)
}

96
utils/utils_test.go Normal file
View File

@ -0,0 +1,96 @@
// Copyright 2015 quay-sec authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package utils
import (
"bytes"
"os"
"path"
"runtime"
"testing"
"github.com/pborman/uuid"
"github.com/stretchr/testify/assert"
)
const fileToDownload = "http://www.google.com/robots.txt"
// TestDiff tests the CompareStringLists function from string.go
func TestDiff(t *testing.T) {
assert.NotContains(t, CompareStringLists([]string{"a", "b", "a"}, []string{"a", "c"}), "a")
}
// TestExec tests the exec.go source file
func TestExec(t *testing.T) {
_, err := Exec(uuid.New(), "touch", uuid.New())
assert.Error(t, err, "Exec should not be able to run in a not existing directory")
o, err := Exec("/tmp", "echo", "test")
assert.Nil(t, err, "Could not exec echo")
assert.Equal(t, "test\n", string(o), "Could not exec echo")
_, err = Exec("/tmp", uuid.New())
assert.Error(t, err, "An invalid command should return an error")
}
// TestString tests the string.go file
func TestString(t *testing.T) {
assert.Equal(t, Hash("abc123"), Hash("abc123"))
assert.NotEqual(t, Hash("abc123."), Hash("abc123"))
assert.False(t, Contains("", []string{}))
assert.True(t, Contains("a", []string{"a", "b"}))
assert.False(t, Contains("c", []string{"a", "b"}))
}
// TestTar tests the tar.go file
func TestTar(t *testing.T) {
var err error
var data map[string][]byte
_, filepath, _, _ := runtime.Caller(0)
for _, filename := range []string{"/testdata/utils_test.tar.gz", "/testdata/utils_test.tar"} {
testArchivePath := path.Join(path.Dir(filepath)) + filename
// Extracting data that is not a valid tar archive should fail
data, err = SelectivelyExtractArchive(bytes.NewReader([]byte("that string does not represent a tar or tar-gzip file")), []string{}, 0)
assert.Error(t, err, "Extracting non compressed data should return an error")
// Extract an archive
f, _ := os.Open(testArchivePath)
defer f.Close()
data, err = SelectivelyExtractArchive(f, []string{"test/"}, 0)
assert.Nil(t, err)
if c, n := data["test/test.txt"]; !n {
assert.Fail(t, "test/test.txt should have been extracted")
} else {
assert.True(t, len(c) > 0, "test/test.txt file is empty")
}
if _, n := data["test.txt"]; n {
assert.Fail(t, "test.txt should not be extracted")
}
// File size limit
f, _ = os.Open(testArchivePath)
defer f.Close()
data, err = SelectivelyExtractArchive(f, []string{"test"}, 50)
assert.Equal(t, ErrExtractedFileTooBig, err)
}
}
func TestCleanURL(t *testing.T) {
assert.Equal(t, "Test http://test.cn/test Test", CleanURL("Test http://test.cn/test?foo=bar&bar=foo Test"))
}

4
vendor/github.com/alecthomas/kingpin/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,4 @@
sudo: false
language: go
install: go get -t -v ./...
go: 1.2

19
vendor/github.com/alecthomas/kingpin/COPYING generated vendored Normal file
View File

@ -0,0 +1,19 @@
Copyright (C) 2014 Alec Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

555
vendor/github.com/alecthomas/kingpin/README.md generated vendored Normal file
View File

@ -0,0 +1,555 @@
# Kingpin - A Go (golang) command line and flag parser [![Build Status](https://travis-ci.org/alecthomas/kingpin.png)](https://travis-ci.org/alecthomas/kingpin)
<!-- MarkdownTOC -->
- [Overview](#overview)
- [Features](#features)
- [User-visible changes between v1 and v2](#user-visible-changes-between-v1-and-v2)
- [Flags can be used at any point after their definition.](#flags-can-be-used-at-any-point-after-their-definition)
- [Short flags can be combined with their parameters](#short-flags-can-be-combined-with-their-parameters)
- [API changes between v1 and v2](#api-changes-between-v1-and-v2)
- [Versions](#versions)
- [V2 is the current stable version](#v2-is-the-current-stable-version)
- [V1 is the OLD stable version](#v1-is-the-old-stable-version)
- [Change History](#change-history)
- [Examples](#examples)
- [Simple Example](#simple-example)
- [Complex Example](#complex-example)
- [Reference Documentation](#reference-documentation)
- [Displaying errors and usage information](#displaying-errors-and-usage-information)
- [Sub-commands](#sub-commands)
- [Custom Parsers](#custom-parsers)
- [Default Values](#default-values)
- [Place-holders in Help](#place-holders-in-help)
- [Consuming all remaining arguments](#consuming-all-remaining-arguments)
- [Custom help](#custom-help)
<!-- /MarkdownTOC -->
## Overview
Kingpin is a [fluent-style](http://en.wikipedia.org/wiki/Fluent_interface),
type-safe command-line parser. It supports flags, nested commands, and
positional arguments.
Install it with:
$ go get gopkg.in/alecthomas/kingpin.v2
It looks like this:
```go
var (
verbose = kingpin.Flag("verbose", "Verbose mode.").Short('v').Bool()
name = kingpin.Arg("name", "Name of user.").Required().String()
)
func main() {
kingpin.Parse()
fmt.Printf("%v, %s\n", *verbose, *name)
}
```
More [examples](https://github.com/alecthomas/kingpin/tree/master/examples) are available.
Second to parsing, providing the user with useful help is probably the most
important thing a command-line parser does. Kingpin tries to provide detailed
contextual help if `--help` is encountered at any point in the command line
(excluding after `--`).
## Features
- Help output that isn't as ugly as sin.
- Fully [customisable help](#custom-help), via Go templates.
- Parsed, type-safe flags (`kingpin.Flag("f", "help").Int()`)
- Parsed, type-safe positional arguments (`kingpin.Arg("a", "help").Int()`).
- Parsed, type-safe, arbitrarily deep commands (`kingpin.Command("c", "help")`).
- Support for required flags and required positional arguments (`kingpin.Flag("f", "").Required().Int()`).
- Support for arbitrarily nested default commands (`command.Default()`).
- Callbacks per command, flag and argument (`kingpin.Command("c", "").Action(myAction)`).
- POSIX-style short flag combining (`-a -b` -> `-ab`).
- Short-flag+parameter combining (`-a parm` -> `-aparm`).
- Read command-line from files (`@<file>`).
- Automatically generate man pages (`--man-page`).
## User-visible changes between v1 and v2
### Flags can be used at any point after their definition.
Flags can be specified at any point after their definition, not just
*immediately after their associated command*. From the chat example below, the
following used to be required:
```
$ chat --server=chat.server.com:8080 post --image=~/Downloads/owls.jpg pics
```
But the following will now work:
```
$ chat post --server=chat.server.com:8080 --image=~/Downloads/owls.jpg pics
```
### Short flags can be combined with their parameters
Previously, if a short flag was used, any argument to that flag would have to
be separated by a space. That is no longer the case.
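For example, with the `ping` program from the simple example later in this README (a hypothetical invocation), both of the following are now accepted:

```
$ ping -t 5s 1.2.3.4
$ ping -t5s 1.2.3.4
```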
## API changes between v1 and v2
- `ParseWithFileExpansion()` is gone. The new parser directly supports expanding `@<file>`.
- Added `FatalUsage()` and `FatalUsageContext()` for displaying an error + usage and terminating.
- `Dispatch()` renamed to `Action()`.
- Added `ParseContext()` for parsing a command line into its intermediate context form without executing.
- Added `Terminate()` function to override the termination function.
- Added `UsageForContextWithTemplate()` for printing usage via a custom template.
- Added `UsageTemplate()` for overriding the default template to use. Two templates are included:
1. `DefaultUsageTemplate` - default template.
2. `CompactUsageTemplate` - compact command template for larger applications.
## Versions
Kingpin uses [gopkg.in](https://gopkg.in/alecthomas/kingpin) for versioning.
The current stable version is [gopkg.in/alecthomas/kingpin.v2](https://gopkg.in/alecthomas/kingpin.v2). The previous version, [gopkg.in/alecthomas/kingpin.v1](https://gopkg.in/alecthomas/kingpin.v1), is deprecated and in maintenance mode.
### [V2](https://gopkg.in/alecthomas/kingpin.v2) is the current stable version
Installation:
```sh
$ go get gopkg.in/alecthomas/kingpin.v2
```
### [V1](https://gopkg.in/alecthomas/kingpin.v1) is the OLD stable version
Installation:
```sh
$ go get gopkg.in/alecthomas/kingpin.v1
```
## Change History
- *2015-09-19* -- Stable v2.1.0 release.
- Added `command.Default()` to specify a default command to use if no other
command matches. This allows for convenient user shortcuts.
- Exposed `HelpFlag` and `VersionFlag` for further customisation.
- `Action()` and `PreAction()` added and both now support an arbitrary
number of callbacks.
- `kingpin.SeparateOptionalFlagsUsageTemplate`.
- `--help-long` and `--help-man` (hidden by default) flags.
- Flags are "interspersed" by default, but can be disabled with `app.Interspersed(false)`.
- Added flags for all simple builtin types (int8, uint16, etc.) and slice variants.
- Use `app.Writer(os.Writer)` to specify the default writer for all output functions.
- Dropped `os.Writer` prefix from all printf-like functions.
- *2015-05-22* -- Stable v2.0.0 release.
- Initial stable release of v2.0.0.
- Fully supports interspersed flags, commands and arguments.
- Flags can be present at any point after their logical definition.
- Application.Parse() terminates if commands are present and a command is not parsed.
- Dispatch() -> Action().
- Actions are dispatched after all values are populated.
- Override termination function (defaults to os.Exit).
- Override output stream (defaults to os.Stderr).
- Templatised usage help, with default and compact templates.
- Make error/usage functions more consistent.
- Support argument expansion from files by default (with @<file>).
- Fully public data model is available via .Model().
- Parser has been completely refactored.
- Parsing and execution has been split into distinct stages.
- Use `go generate` to generate repeated flags.
- Support combined short-flag+argument: -fARG.
- *2015-01-23* -- Stable v1.3.4 release.
- Support "--" for separating flags from positional arguments.
- Support loading flags from files (ParseWithFileExpansion()). Use @FILE as an argument.
- Add post-app and post-cmd validation hooks. This allows arbitrary validation to be added.
- A bunch of improvements to help usage and formatting.
- Support arbitrarily nested sub-commands.
- *2014-07-08* -- Stable v1.2.0 release.
- Pass any value through to `Strings()` when final argument.
Allows for values that look like flags to be processed.
- Allow `--help` to be used with commands.
- Support `Hidden()` flags.
- Parser for [units.Base2Bytes](https://github.com/alecthomas/units)
type. Allows for flags like `--ram=512MB` or `--ram=1GB`.
- Add an `Enum()` value, allowing only one of a set of values
to be selected. eg. `Flag(...).Enum("debug", "info", "warning")`.
- *2014-06-27* -- Stable v1.1.0 release.
- Bug fixes.
- Always return an error (rather than panicking) when misconfigured.
- `OpenFile(flag, perm)` value type added, for finer control over opening files.
- Significantly improved usage formatting.
- *2014-06-19* -- Stable v1.0.0 release.
- Support [cumulative positional](#consuming-all-remaining-arguments) arguments.
- Return error rather than panic when there are fatal errors not caught by
the type system. eg. when a default value is invalid.
- Use gopkg.in.
- *2014-06-10* -- Place-holder streamlining.
- Renamed `MetaVar` to `PlaceHolder`.
- Removed `MetaVarFromDefault`. Kingpin now uses [heuristics](#place-holders-in-help)
to determine what to display.
## Examples
### Simple Example
Kingpin can be used for simple flag+arg applications like so:
```
$ ping --help
usage: ping [<flags>] <ip> [<count>]
Flags:
--debug Enable debug mode.
--help Show help.
-t, --timeout=5s Timeout waiting for ping.
Args:
<ip> IP address to ping.
[<count>] Number of packets to send
$ ping 1.2.3.4 5
Would ping: 1.2.3.4 with timeout 5s and count 0
```
From the following source:
```go
package main
import (
"fmt"
"gopkg.in/alecthomas/kingpin.v2"
)
var (
debug = kingpin.Flag("debug", "Enable debug mode.").Bool()
timeout = kingpin.Flag("timeout", "Timeout waiting for ping.").Default("5s").OverrideDefaultFromEnvar("PING_TIMEOUT").Short('t').Duration()
ip = kingpin.Arg("ip", "IP address to ping.").Required().IP()
count = kingpin.Arg("count", "Number of packets to send").Int()
)
func main() {
kingpin.Version("0.0.1")
kingpin.Parse()
fmt.Printf("Would ping: %s with timeout %s and count %d", *ip, *timeout, *count)
}
```
### Complex Example
Kingpin can also produce complex command-line applications with global flags,
subcommands, and per-subcommand flags, like this:
```
$ chat --help
usage: chat [<flags>] <command> [<flags>] [<args> ...]
A command-line chat application.
Flags:
--help Show help.
--debug Enable debug mode.
--server=127.0.0.1 Server address.
Commands:
help [<command>]
Show help for a command.
register <nick> <name>
Register a new user.
post [<flags>] <channel> [<text>]
Post a message to a channel.
$ chat help post
usage: chat [<flags>] post [<flags>] <channel> [<text>]
Post a message to a channel.
Flags:
--image=IMAGE Image to post.
Args:
<channel> Channel to post to.
[<text>] Text to post.
$ chat post --image=~/Downloads/owls.jpg pics
...
```
From this code:
```go
package main
import (
"os"
"strings"
"gopkg.in/alecthomas/kingpin.v2"
)
var (
app = kingpin.New("chat", "A command-line chat application.")
debug = app.Flag("debug", "Enable debug mode.").Bool()
serverIP = app.Flag("server", "Server address.").Default("127.0.0.1").IP()
register = app.Command("register", "Register a new user.")
registerNick = register.Arg("nick", "Nickname for user.").Required().String()
registerName = register.Arg("name", "Name of user.").Required().String()
post = app.Command("post", "Post a message to a channel.")
postImage = post.Flag("image", "Image to post.").File()
postChannel = post.Arg("channel", "Channel to post to.").Required().String()
postText = post.Arg("text", "Text to post.").Strings()
)
func main() {
switch kingpin.MustParse(app.Parse(os.Args[1:])) {
// Register user
case register.FullCommand():
println(*registerNick)
// Post message
case post.FullCommand():
if *postImage != nil {
}
text := strings.Join(*postText, " ")
println("Post:", text)
}
}
```
## Reference Documentation
### Displaying errors and usage information
Kingpin exports a set of functions to provide consistent errors and usage
information to the user.
Error messages look something like this:
<app>: error: <message>
The functions on `Application` are:
Function | Purpose
---------|--------------
`Errorf(format, args)` | Display a printf formatted error to the user.
`Fatalf(format, args)` | As with Errorf, but also call the termination handler.
`FatalUsage(format, args)` | As with Fatalf, but also print contextual usage information.
`FatalUsageContext(context, format, args)` | As with Fatalf, but also print contextual usage information from a `ParseContext`.
`FatalIfError(err, format, args)` | Conditionally print an error prefixed with format+args, then call the termination handler.
There are equivalent global functions in the kingpin namespace for the default
`kingpin.CommandLine` instance.
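For instance (a minimal sketch; the file name and message are made up), the global `FatalIfError()` replaces a manual error check followed by `os.Exit()`:
```go
package main

import (
    "os"

    "gopkg.in/alecthomas/kingpin.v2"
)

func main() {
    kingpin.Parse()
    // Hypothetical file name, purely for illustration.
    f, err := os.Open("config.toml")
    // On error this prints "<app>: error: opening config: ..." and terminates.
    kingpin.FatalIfError(err, "opening config")
    defer f.Close()
}
```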
### Sub-commands
Kingpin supports nested sub-commands, with separate flag and positional
arguments per sub-command. Note that positional arguments may only occur after
sub-commands.
For example:
```go
var (
deleteCommand = kingpin.Command("delete", "Delete an object.")
deleteUserCommand = deleteCommand.Command("user", "Delete a user.")
deleteUserUIDFlag = deleteUserCommand.Flag("uid", "Delete user by UID rather than username.")
deleteUserUsername = deleteUserCommand.Arg("username", "Username to delete.")
deletePostCommand = deleteCommand.Command("post", "Delete a post.")
)
func main() {
switch kingpin.Parse() {
case "delete user":
case "delete post":
}
}
```
### Custom Parsers
Kingpin supports both flag and positional argument parsers for converting to
Go types. For example, some included parsers are `Int()`, `Float()`,
`Duration()` and `ExistingFile()`.
Parsers conform to Go's [`flag.Value`](http://godoc.org/flag#Value)
interface, so any existing implementations will work.
For example, a parser for accumulating HTTP header values might look like this:
```go
type HTTPHeaderValue http.Header
func (h *HTTPHeaderValue) Set(value string) error {
parts := strings.SplitN(value, ":", 2)
if len(parts) != 2 {
return fmt.Errorf("expected HEADER:VALUE got '%s'", value)
}
(*http.Header)(h).Add(parts[0], parts[1])
return nil
}
func (h *HTTPHeaderValue) String() string {
return ""
}
```
As a convenience, I would recommend something like this:
```go
func HTTPHeader(s Settings) (target *http.Header) {
target = new(http.Header)
s.SetValue((*HTTPHeaderValue)(target))
return
}
```
You would use it like so:
```go
headers = HTTPHeader(kingpin.Flag("header", "Add a HTTP header to the request.").Short('H'))
```
### Default Values
The default value is the zero value for a type. This can be overridden with
the `Default(value)` function on flags and arguments. This function accepts a
string, which is parsed by the value itself, so it *must* be compliant with
the format expected.
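For example (a sketch; the flag names and defaults are illustrative), the default is passed as a string and parsed by the flag's own value type:
```go
package main

import (
    "fmt"

    "gopkg.in/alecthomas/kingpin.v2"
)

var (
    // "3" is parsed by the Int() value; an unparseable default is reported as
    // an error at parse time rather than causing a panic.
    retries = kingpin.Flag("retries", "Number of retries.").Default("3").Int()
    // Duration defaults use the same string form.
    wait = kingpin.Flag("wait", "Wait between retries.").Default("5s").Duration()
)

func main() {
    kingpin.Parse()
    fmt.Println(*retries, *wait)
}
```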
### Place-holders in Help
The place-holder value for a flag is the value used in the help to describe
the value of a non-boolean flag.
The value given to `PlaceHolder()` is used if one was set; otherwise the value
given to `Default()` is used if present; failing both, the capitalised flag
name is used.
Here are some examples of flags with various permutations:
--name=NAME // Flag(...).String()
--name="Harry" // Flag(...).Default("Harry").String()
--name=FULL-NAME // Flag(...).PlaceHolder("FULL-NAME").Default("Harry").String()
### Consuming all remaining arguments
A common command-line idiom is to use all remaining arguments for some
purpose, e.g. the following command accepts an arbitrary number of
IP addresses as positional arguments:
./cmd ping 10.1.1.1 192.168.1.1
Kingpin supports this by having `Value` provide an `IsCumulative() bool`
function. If this function exists and returns true, the value parser will be
called repeatedly for every remaining argument.
Examples of this are the `Strings()` and `StringMap()` values.
To implement the above example we might do something like this:
```go
type ipList []net.IP
func (i *ipList) Set(value string) error {
if ip := net.ParseIP(value); ip == nil {
return fmt.Errorf("'%s' is not an IP address", value)
} else {
*i = append(*i, ip)
return nil
}
}
func (i *ipList) String() string {
return ""
}
func (i *ipList) IsCumulative() bool {
return true
}
func IPList(s Settings) (target *[]net.IP) {
target = new([]net.IP)
s.SetValue((*ipList)(target))
return
}
```
And use it like so:
```go
ips := IPList(kingpin.Arg("ips", "IP addresses to ping."))
```
### Custom help
Kingpin v2 supports templatised help using the text/template library (actually, [a fork](https://github.com/alecthomas/template)).
You can specify the template to use with the [Application.UsageTemplate()](http://godoc.org/gopkg.in/alecthomas/kingpin.v2#Application.UsageTemplate) function.
There are four included templates: `kingpin.DefaultUsageTemplate` is the default,
`kingpin.CompactUsageTemplate` provides a more compact representation for more complex command-line structures,
`kingpin.SeparateOptionalFlagsUsageTemplate` looks like the default template, but splits required
and optional command flags into separate lists, and `kingpin.ManPageTemplate` is used to generate man pages.
See the above templates for examples of usage, and the [UsageForContextWithTemplate()](https://github.com/alecthomas/kingpin/blob/master/usage.go#L198) function for details on the context.
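A minimal sketch of selecting one of the included templates (the application and command names are made up):
```go
package main

import (
    "os"

    "gopkg.in/alecthomas/kingpin.v2"
)

func main() {
    app := kingpin.New("demo", "Demonstrates the compact usage template.")
    // Any included template, or your own text/template source, can be passed here.
    app.UsageTemplate(kingpin.CompactUsageTemplate)
    app.Command("serve", "Start the server.")
    kingpin.MustParse(app.Parse(os.Args[1:]))
}
```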
#### Default help template
```
$ go run ./examples/curl/curl.go --help
usage: curl [<flags>] <command> [<args> ...]
An example implementation of curl.
Flags:
--help Show help.
-t, --timeout=5s Set connection timeout.
-H, --headers=HEADER=VALUE
Add HTTP headers to the request.
Commands:
help [<command>...]
Show help.
get url <url>
Retrieve a URL.
get file <file>
Retrieve a file.
post [<flags>] <url>
POST a resource.
```
#### Compact help template
```
$ go run ./examples/curl/curl.go --help
usage: curl [<flags>] <command> [<args> ...]
An example implementation of curl.
Flags:
--help Show help.
-t, --timeout=5s Set connection timeout.
-H, --headers=HEADER=VALUE
Add HTTP headers to the request.
Commands:
help [<command>...]
get [<flags>]
url <url>
file <file>
post [<flags>] <url>
```

42
vendor/github.com/alecthomas/kingpin/actions.go generated vendored Normal file
View File

@ -0,0 +1,42 @@
package kingpin
// Action callback executed at various stages after all values are populated.
// The application, commands, arguments and flags all have corresponding
// actions.
type Action func(*ParseContext) error
type actionMixin struct {
actions []Action
preActions []Action
}
type actionApplier interface {
applyActions(*ParseContext) error
applyPreActions(*ParseContext) error
}
func (a *actionMixin) addAction(action Action) {
a.actions = append(a.actions, action)
}
func (a *actionMixin) addPreAction(action Action) {
a.preActions = append(a.preActions, action)
}
func (a *actionMixin) applyActions(context *ParseContext) error {
for _, action := range a.actions {
if err := action(context); err != nil {
return err
}
}
return nil
}
func (a *actionMixin) applyPreActions(context *ParseContext) error {
for _, preAction := range a.preActions {
if err := preAction(context); err != nil {
return err
}
}
return nil
}

536
vendor/github.com/alecthomas/kingpin/app.go generated vendored Normal file
View File

@ -0,0 +1,536 @@
package kingpin
import (
"fmt"
"io"
"os"
"strings"
)
var (
ErrCommandNotSpecified = fmt.Errorf("command not specified")
)
type ApplicationValidator func(*Application) error
// An Application contains the definitions of flags, arguments and commands
// for an application.
type Application struct {
*flagGroup
*argGroup
*cmdGroup
actionMixin
initialized bool
Name string
Help string
author string
version string
writer io.Writer // Destination for usage and errors.
usageTemplate string
validator ApplicationValidator
terminate func(status int) // See Terminate()
noInterspersed bool // can flags be interspersed with args (or must they come first)
}
var (
// Global help flag. Exposed for user customisation.
HelpFlag *FlagClause
// Top-level help command. Exposed for user customisation. May be nil.
HelpCommand *CmdClause
// Global version flag. Exposed for user customisation. May be nil.
VersionFlag *FlagClause
)
// New creates a new Kingpin application instance.
func New(name, help string) *Application {
a := &Application{
flagGroup: newFlagGroup(),
argGroup: newArgGroup(),
Name: name,
Help: help,
writer: os.Stderr,
usageTemplate: DefaultUsageTemplate,
terminate: os.Exit,
}
a.cmdGroup = newCmdGroup(a)
HelpFlag = a.Flag("help", "Show context-sensitive help (also try --help-long and --help-man).")
HelpFlag.Bool()
a.Flag("help-long", "Generate long help.").Hidden().PreAction(a.generateLongHelp).Bool()
a.Flag("help-man", "Generate a man page.").Hidden().PreAction(a.generateManPage).Bool()
return a
}
func (a *Application) generateLongHelp(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, LongHelpTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
func (a *Application) generateManPage(c *ParseContext) error {
a.Writer(os.Stdout)
if err := a.UsageForContextWithTemplate(c, 2, ManPageTemplate); err != nil {
return err
}
a.terminate(0)
return nil
}
// Terminate specifies the termination handler. Defaults to os.Exit(status).
// If nil is passed, a no-op function will be used.
func (a *Application) Terminate(terminate func(int)) *Application {
if terminate == nil {
terminate = func(int) {}
}
a.terminate = terminate
return a
}
// Specify the writer to use for usage and errors. Defaults to os.Stderr.
func (a *Application) Writer(w io.Writer) *Application {
a.writer = w
return a
}
// UsageTemplate specifies the text template to use when displaying usage
// information. The default is UsageTemplate.
func (a *Application) UsageTemplate(template string) *Application {
a.usageTemplate = template
return a
}
// Validate sets a validation function to run when parsing.
func (a *Application) Validate(validator ApplicationValidator) *Application {
a.validator = validator
return a
}
// ParseContext parses the given command line and returns the fully populated
// ParseContext.
func (a *Application) ParseContext(args []string) (*ParseContext, error) {
return a.parseContext(false, args)
}
func (a *Application) parseContext(ignoreDefault bool, args []string) (*ParseContext, error) {
if err := a.init(); err != nil {
return nil, err
}
context := tokenize(args, ignoreDefault)
err := parse(context, a)
return context, err
}
// Parse parses command-line arguments. It returns the selected command and an
// error. The selected command will be a space separated subcommand, if
// subcommands have been configured.
//
// This will populate all flag and argument values, call all callbacks, and so
// on.
func (a *Application) Parse(args []string) (command string, err error) {
context, err := a.ParseContext(args)
if err != nil {
return "", err
}
a.maybeHelp(context)
if !context.EOL() {
return "", fmt.Errorf("unexpected argument '%s'", context.Peek())
}
command, err = a.execute(context)
if err == ErrCommandNotSpecified {
a.writeUsage(context, nil)
}
return command, err
}
func (a *Application) writeUsage(context *ParseContext, err error) {
if err != nil {
a.Errorf("%s", err)
}
if err := a.UsageForContext(context); err != nil {
panic(err)
}
a.terminate(1)
}
func (a *Application) maybeHelp(context *ParseContext) {
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok && flag == HelpFlag {
a.writeUsage(context, nil)
}
}
}
// findCommandFromArgs finds a command (if any) from the given command line arguments.
func (a *Application) findCommandFromArgs(args []string) (command string, err error) {
if err := a.init(); err != nil {
return "", err
}
context := tokenize(args, false)
if _, err := a.parse(context); err != nil {
return "", err
}
return a.findCommandFromContext(context), nil
}
// findCommandFromContext finds a command (if any) from a parsed context.
func (a *Application) findCommandFromContext(context *ParseContext) string {
commands := []string{}
for _, element := range context.Elements {
if c, ok := element.Clause.(*CmdClause); ok {
commands = append(commands, c.name)
}
}
return strings.Join(commands, " ")
}
// Version adds a --version flag for displaying the application version.
func (a *Application) Version(version string) *Application {
a.version = version
VersionFlag = a.Flag("version", "Show application version.").PreAction(func(*ParseContext) error {
fmt.Fprintln(a.writer, version)
a.terminate(0)
return nil
})
VersionFlag.Bool()
return a
}
func (a *Application) Author(author string) *Application {
a.author = author
return a
}
// Action callback to call when all values are populated and parsing is
// complete, but before any command, flag or argument actions.
//
// All Action() callbacks are called in the order they are encountered on the
// command line.
func (a *Application) Action(action Action) *Application {
a.addAction(action)
return a
}
// Action called after parsing completes but before validation and execution.
func (a *Application) PreAction(action Action) *Application {
a.addPreAction(action)
return a
}
// Command adds a new top-level command.
func (a *Application) Command(name, help string) *CmdClause {
return a.addCommand(name, help)
}
// Interspersed controls whether flags can be interspersed with positional arguments.
//
// true (the default) means that they can; false means that all flags must appear before the first positional argument.
func (a *Application) Interspersed(interspersed bool) *Application {
a.noInterspersed = !interspersed
return a
}
func (a *Application) init() error {
if a.initialized {
return nil
}
if a.cmdGroup.have() && a.argGroup.have() {
return fmt.Errorf("can't mix top-level Arg()s with Command()s")
}
// If we have subcommands, add a help command at the top-level.
if a.cmdGroup.have() {
var command []string
HelpCommand = a.Command("help", "Show help.").PreAction(func(context *ParseContext) error {
a.Usage(command)
a.terminate(0)
return nil
})
HelpCommand.Arg("command", "Show help on command.").StringsVar(&command)
// Make help first command.
l := len(a.commandOrder)
a.commandOrder = append(a.commandOrder[l-1:l], a.commandOrder[:l-1]...)
}
if err := a.flagGroup.init(); err != nil {
return err
}
if err := a.cmdGroup.init(); err != nil {
return err
}
if err := a.argGroup.init(); err != nil {
return err
}
for _, cmd := range a.commands {
if err := cmd.init(); err != nil {
return err
}
}
flagGroups := []*flagGroup{a.flagGroup}
for _, cmd := range a.commandOrder {
if err := checkDuplicateFlags(cmd, flagGroups); err != nil {
return err
}
}
a.initialized = true
return nil
}
// Recursively check commands for duplicate flags.
func checkDuplicateFlags(current *CmdClause, flagGroups []*flagGroup) error {
// Check for duplicates.
for _, flags := range flagGroups {
for _, flag := range current.flagOrder {
if flag.shorthand != 0 {
if _, ok := flags.short[string(flag.shorthand)]; ok {
return fmt.Errorf("duplicate short flag -%c", flag.shorthand)
}
}
if _, ok := flags.long[flag.name]; ok {
return fmt.Errorf("duplicate long flag --%s", flag.name)
}
}
}
flagGroups = append(flagGroups, current.flagGroup)
// Check subcommands.
for _, subcmd := range current.commandOrder {
if err := checkDuplicateFlags(subcmd, flagGroups); err != nil {
return err
}
}
return nil
}
func (a *Application) execute(context *ParseContext) (string, error) {
var err error
selected := []string{}
if err = a.setDefaults(context); err != nil {
return "", err
}
selected, err = a.setValues(context)
if err != nil {
return "", err
}
if err = a.applyPreActions(context); err != nil {
return "", err
}
if err = a.validateRequired(context); err != nil {
return "", err
}
if err = a.applyValidators(context); err != nil {
return "", err
}
if err = a.applyActions(context); err != nil {
return "", err
}
command := strings.Join(selected, " ")
if command == "" && a.cmdGroup.have() {
return "", ErrCommandNotSpecified
}
return command, err
}
func (a *Application) setDefaults(context *ParseContext) error {
flagElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok {
flagElements[flag.name] = element
}
}
argElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if arg, ok := element.Clause.(*ArgClause); ok {
argElements[arg.name] = element
}
}
// Check required flags and set defaults.
for _, flag := range context.flags.long {
if flagElements[flag.name] == nil {
// Set defaults, if any.
if flag.defaultValue != "" {
if err := flag.value.Set(flag.defaultValue); err != nil {
return err
}
}
}
}
for _, arg := range context.arguments.args {
if argElements[arg.name] == nil {
// Set defaults, if any.
if arg.defaultValue != "" {
if err := arg.value.Set(arg.defaultValue); err != nil {
return err
}
}
}
}
return nil
}
func (a *Application) validateRequired(context *ParseContext) error {
flagElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if flag, ok := element.Clause.(*FlagClause); ok {
flagElements[flag.name] = element
}
}
argElements := map[string]*ParseElement{}
for _, element := range context.Elements {
if arg, ok := element.Clause.(*ArgClause); ok {
argElements[arg.name] = element
}
}
// Check that required flags and arguments were provided.
for _, flag := range context.flags.long {
if flagElements[flag.name] == nil {
// Check required flags were provided.
if flag.needsValue() {
return fmt.Errorf("required flag --%s not provided", flag.name)
}
}
}
for _, arg := range context.arguments.args {
if argElements[arg.name] == nil {
if arg.required {
return fmt.Errorf("required argument '%s' not provided", arg.name)
}
}
}
return nil
}
func (a *Application) setValues(context *ParseContext) (selected []string, err error) {
// Set all arg and flag values.
var lastCmd *CmdClause
for _, element := range context.Elements {
switch clause := element.Clause.(type) {
case *FlagClause:
if err = clause.value.Set(*element.Value); err != nil {
return
}
case *ArgClause:
if err = clause.value.Set(*element.Value); err != nil {
return
}
case *CmdClause:
if clause.validator != nil {
if err = clause.validator(clause); err != nil {
return
}
}
selected = append(selected, clause.name)
lastCmd = clause
}
}
if lastCmd != nil && len(lastCmd.commands) > 0 {
return nil, fmt.Errorf("must select a subcommand of '%s'", lastCmd.FullCommand())
}
return
}
func (a *Application) applyValidators(context *ParseContext) (err error) {
// Call command validation functions.
for _, element := range context.Elements {
if cmd, ok := element.Clause.(*CmdClause); ok && cmd.validator != nil {
if err = cmd.validator(cmd); err != nil {
return err
}
}
}
if a.validator != nil {
err = a.validator(a)
}
return err
}
func (a *Application) applyPreActions(context *ParseContext) error {
if err := a.actionMixin.applyPreActions(context); err != nil {
return err
}
// Dispatch to actions.
for _, element := range context.Elements {
if applier, ok := element.Clause.(actionApplier); ok {
if err := applier.applyPreActions(context); err != nil {
return err
}
}
}
return nil
}
func (a *Application) applyActions(context *ParseContext) error {
if err := a.actionMixin.applyActions(context); err != nil {
return err
}
// Dispatch to actions.
for _, element := range context.Elements {
if applier, ok := element.Clause.(actionApplier); ok {
if err := applier.applyActions(context); err != nil {
return err
}
}
}
return nil
}
// Errorf prints an error message to w in the format "<appname>: error: <message>".
func (a *Application) Errorf(format string, args ...interface{}) {
fmt.Fprintf(a.writer, a.Name+": error: "+format+"\n", args...)
}
// Fatalf writes a formatted error to w then terminates with exit status 1.
func (a *Application) Fatalf(format string, args ...interface{}) {
a.Errorf(format, args...)
a.terminate(1)
}
// FatalUsage prints an error message followed by usage information, then
// exits with a non-zero status.
func (a *Application) FatalUsage(format string, args ...interface{}) {
a.Errorf(format, args...)
a.Usage([]string{})
a.terminate(1)
}
// FatalUsageContext writes a printf formatted error message to w, then usage
// information for the given ParseContext, before exiting.
func (a *Application) FatalUsageContext(context *ParseContext, format string, args ...interface{}) {
a.Errorf(format, args...)
if err := a.UsageForContext(context); err != nil {
panic(err)
}
a.terminate(1)
}
// FatalIfError prints an error and exits if err is not nil. The error is printed
// with the given formatted string, if any.
func (a *Application) FatalIfError(err error, format string, args ...interface{}) {
if err != nil {
prefix := ""
if format != "" {
prefix = fmt.Sprintf(format, args...) + ": "
}
a.Errorf(prefix+"%s", err)
a.terminate(1)
}
}

197
vendor/github.com/alecthomas/kingpin/app_test.go generated vendored Normal file
View File

@ -0,0 +1,197 @@
package kingpin
import (
"io/ioutil"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
func TestCommander(t *testing.T) {
c := New("test", "test")
ping := c.Command("ping", "Ping an IP address.")
pingTTL := ping.Flag("ttl", "TTL for ICMP packets").Short('t').Default("5s").Duration()
selected, err := c.Parse([]string{"ping"})
assert.NoError(t, err)
assert.Equal(t, "ping", selected)
assert.Equal(t, 5*time.Second, *pingTTL)
selected, err = c.Parse([]string{"ping", "--ttl=10s"})
assert.NoError(t, err)
assert.Equal(t, "ping", selected)
assert.Equal(t, 10*time.Second, *pingTTL)
}
func TestRequiredFlags(t *testing.T) {
c := New("test", "test")
c.Flag("a", "a").String()
c.Flag("b", "b").Required().String()
_, err := c.Parse([]string{"--a=foo"})
assert.Error(t, err)
_, err = c.Parse([]string{"--b=foo"})
assert.NoError(t, err)
}
func TestInvalidDefaultFlagValueErrors(t *testing.T) {
c := New("test", "test")
c.Flag("foo", "foo").Default("a").Int()
_, err := c.Parse([]string{})
assert.Error(t, err)
}
func TestInvalidDefaultArgValueErrors(t *testing.T) {
c := New("test", "test")
cmd := c.Command("cmd", "cmd")
cmd.Arg("arg", "arg").Default("one").Int()
_, err := c.Parse([]string{"cmd"})
assert.Error(t, err)
}
func TestArgsRequiredAfterNonRequiredErrors(t *testing.T) {
c := New("test", "test")
cmd := c.Command("cmd", "")
cmd.Arg("a", "a").String()
cmd.Arg("b", "b").Required().String()
_, err := c.Parse([]string{"cmd"})
assert.Error(t, err)
}
func TestArgsMultipleRequiredThenNonRequired(t *testing.T) {
c := New("test", "test").Terminate(nil).Writer(ioutil.Discard)
cmd := c.Command("cmd", "")
cmd.Arg("a", "a").Required().String()
cmd.Arg("b", "b").Required().String()
cmd.Arg("c", "c").String()
cmd.Arg("d", "d").String()
_, err := c.Parse([]string{"cmd", "a", "b"})
assert.NoError(t, err)
_, err = c.Parse([]string{})
assert.Error(t, err)
}
func TestDispatchCallbackIsCalled(t *testing.T) {
dispatched := false
c := New("test", "")
c.Command("cmd", "").Action(func(*ParseContext) error {
dispatched = true
return nil
})
_, err := c.Parse([]string{"cmd"})
assert.NoError(t, err)
assert.True(t, dispatched)
}
func TestTopLevelArgWorks(t *testing.T) {
c := New("test", "test")
s := c.Arg("arg", "help").String()
_, err := c.Parse([]string{"foo"})
assert.NoError(t, err)
assert.Equal(t, "foo", *s)
}
func TestTopLevelArgCantBeUsedWithCommands(t *testing.T) {
c := New("test", "test")
c.Arg("arg", "help").String()
c.Command("cmd", "help")
_, err := c.Parse([]string{})
assert.Error(t, err)
}
func TestTooManyArgs(t *testing.T) {
a := New("test", "test")
a.Arg("a", "").String()
_, err := a.Parse([]string{"a", "b"})
assert.Error(t, err)
}
func TestTooManyArgsAfterCommand(t *testing.T) {
a := New("test", "test")
a.Command("a", "")
assert.NoError(t, a.init())
_, err := a.Parse([]string{"a", "b"})
assert.Error(t, err)
}
func TestArgsLooksLikeFlagsWithConsumeRemainder(t *testing.T) {
a := New("test", "")
a.Arg("opts", "").Required().Strings()
_, err := a.Parse([]string{"hello", "-world"})
assert.Error(t, err)
}
func TestCommandParseDoesNotResetFlagsToDefault(t *testing.T) {
app := New("test", "")
flag := app.Flag("flag", "").Default("default").String()
app.Command("cmd", "")
_, err := app.Parse([]string{"--flag=123", "cmd"})
assert.NoError(t, err)
assert.Equal(t, "123", *flag)
}
func TestCommandParseDoesNotFailRequired(t *testing.T) {
app := New("test", "")
flag := app.Flag("flag", "").Required().String()
app.Command("cmd", "")
_, err := app.Parse([]string{"cmd", "--flag=123"})
assert.NoError(t, err)
assert.Equal(t, "123", *flag)
}
func TestSelectedCommand(t *testing.T) {
app := New("test", "help")
c0 := app.Command("c0", "")
c0.Command("c1", "")
s, err := app.Parse([]string{"c0", "c1"})
assert.NoError(t, err)
assert.Equal(t, "c0 c1", s)
}
func TestSubCommandRequired(t *testing.T) {
app := New("test", "help")
c0 := app.Command("c0", "")
c0.Command("c1", "")
_, err := app.Parse([]string{"c0"})
assert.Error(t, err)
}
func TestInterspersedFalse(t *testing.T) {
app := New("test", "help").Interspersed(false)
a1 := app.Arg("a1", "").String()
a2 := app.Arg("a2", "").String()
f1 := app.Flag("flag", "").String()
_, err := app.Parse([]string{"a1", "--flag=flag"})
assert.NoError(t, err)
assert.Equal(t, "a1", *a1)
assert.Equal(t, "--flag=flag", *a2)
assert.Equal(t, "", *f1)
}
func TestInterspersedTrue(t *testing.T) {
// test once with the default value and once with explicit true
for i := 0; i < 2; i++ {
app := New("test", "help")
if i != 0 {
t.Log("Setting explicit")
app.Interspersed(true)
} else {
t.Log("Using default")
}
a1 := app.Arg("a1", "").String()
a2 := app.Arg("a2", "").String()
f1 := app.Flag("flag", "").String()
_, err := app.Parse([]string{"a1", "--flag=flag"})
assert.NoError(t, err)
assert.Equal(t, "a1", *a1)
assert.Equal(t, "", *a2)
assert.Equal(t, "flag", *f1)
}
}

105
vendor/github.com/alecthomas/kingpin/args.go generated vendored Normal file
View File

@ -0,0 +1,105 @@
package kingpin
import "fmt"
type argGroup struct {
args []*ArgClause
}
func newArgGroup() *argGroup {
return &argGroup{}
}
func (a *argGroup) have() bool {
return len(a.args) > 0
}
func (a *argGroup) Arg(name, help string) *ArgClause {
arg := newArg(name, help)
a.args = append(a.args, arg)
return arg
}
func (a *argGroup) init() error {
required := 0
seen := map[string]struct{}{}
previousArgMustBeLast := false
for i, arg := range a.args {
if previousArgMustBeLast {
return fmt.Errorf("Args() can't be followed by another argument '%s'", arg.name)
}
if arg.consumesRemainder() {
previousArgMustBeLast = true
}
if _, ok := seen[arg.name]; ok {
return fmt.Errorf("duplicate argument '%s'", arg.name)
}
seen[arg.name] = struct{}{}
if arg.required && required != i {
return fmt.Errorf("required arguments found after non-required")
}
if arg.required {
required++
}
if err := arg.init(); err != nil {
return err
}
}
return nil
}
type ArgClause struct {
actionMixin
parserMixin
name string
help string
defaultValue string
required bool
}
func newArg(name, help string) *ArgClause {
a := &ArgClause{
name: name,
help: help,
}
return a
}
func (a *ArgClause) consumesRemainder() bool {
if r, ok := a.value.(remainderArg); ok {
return r.IsCumulative()
}
return false
}
// Required arguments must be input by the user. They can not have a Default() value provided.
func (a *ArgClause) Required() *ArgClause {
a.required = true
return a
}
// Default value for this argument. It *must* be parseable by the value of the argument.
func (a *ArgClause) Default(value string) *ArgClause {
a.defaultValue = value
return a
}
func (a *ArgClause) Action(action Action) *ArgClause {
a.addAction(action)
return a
}
func (a *ArgClause) PreAction(action Action) *ArgClause {
a.addPreAction(action)
return a
}
func (a *ArgClause) init() error {
if a.required && a.defaultValue != "" {
return fmt.Errorf("required argument '%s' with unusable default value", a.name)
}
if a.value == nil {
return fmt.Errorf("no parser defined for arg '%s'", a.name)
}
return nil
}

49
vendor/github.com/alecthomas/kingpin/args_test.go generated vendored Normal file
View File

@ -0,0 +1,49 @@
package kingpin
import (
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
)
func TestArgRemainder(t *testing.T) {
app := New("test", "")
v := app.Arg("test", "").Strings()
args := []string{"hello", "world"}
_, err := app.Parse(args)
assert.NoError(t, err)
assert.Equal(t, args, *v)
}
func TestArgRemainderErrorsWhenNotLast(t *testing.T) {
a := newArgGroup()
a.Arg("test", "").Strings()
a.Arg("test2", "").String()
assert.Error(t, a.init())
}
func TestArgMultipleRequired(t *testing.T) {
terminated := false
app := New("test", "")
app.Version("0.0.0").Writer(ioutil.Discard)
app.Arg("a", "").Required().String()
app.Arg("b", "").Required().String()
app.Terminate(func(int) { terminated = true })
_, err := app.Parse([]string{})
assert.Error(t, err)
_, err = app.Parse([]string{"A"})
assert.Error(t, err)
_, err = app.Parse([]string{"A", "B"})
assert.NoError(t, err)
_, err = app.Parse([]string{"--version"})
assert.True(t, terminated)
}
func TestInvalidArgsDefaultCanBeOverridden(t *testing.T) {
app := New("test", "")
app.Arg("a", "").Default("invalid").Bool()
_, err := app.Parse([]string{})
assert.Error(t, err)
}

161
vendor/github.com/alecthomas/kingpin/cmd.go generated vendored Normal file
View File

@ -0,0 +1,161 @@
package kingpin
import (
"fmt"
"strings"
)
type cmdGroup struct {
app *Application
parent *CmdClause
commands map[string]*CmdClause
commandOrder []*CmdClause
}
func (c *cmdGroup) defaultSubcommand() *CmdClause {
for _, cmd := range c.commandOrder {
if cmd.isDefault {
return cmd
}
}
return nil
}
func newCmdGroup(app *Application) *cmdGroup {
return &cmdGroup{
app: app,
commands: make(map[string]*CmdClause),
}
}
func (c *cmdGroup) flattenedCommands() (out []*CmdClause) {
for _, cmd := range c.commandOrder {
if len(cmd.commands) == 0 {
out = append(out, cmd)
}
out = append(out, cmd.flattenedCommands()...)
}
return
}
func (c *cmdGroup) addCommand(name, help string) *CmdClause {
cmd := newCommand(c.app, name, help)
c.commands[name] = cmd
c.commandOrder = append(c.commandOrder, cmd)
return cmd
}
func (c *cmdGroup) init() error {
seen := map[string]bool{}
if c.defaultSubcommand() != nil && !c.have() {
return fmt.Errorf("default subcommand %q provided but no subcommands defined", c.defaultSubcommand().name)
}
defaults := []string{}
for _, cmd := range c.commandOrder {
if cmd.isDefault {
defaults = append(defaults, cmd.name)
}
if seen[cmd.name] {
return fmt.Errorf("duplicate command %q", cmd.name)
}
seen[cmd.name] = true
if err := cmd.init(); err != nil {
return err
}
}
if len(defaults) > 1 {
return fmt.Errorf("more than one default subcommand exists: %s", strings.Join(defaults, ", "))
}
return nil
}
func (c *cmdGroup) have() bool {
return len(c.commands) > 0
}
type CmdClauseValidator func(*CmdClause) error
// A CmdClause is a single top-level command. It encapsulates a set of flags
// and either subcommands or positional arguments.
type CmdClause struct {
actionMixin
*flagGroup
*argGroup
*cmdGroup
app *Application
name string
help string
isDefault bool
validator CmdClauseValidator
hidden bool
}
func newCommand(app *Application, name, help string) *CmdClause {
c := &CmdClause{
flagGroup: newFlagGroup(),
argGroup: newArgGroup(),
cmdGroup: newCmdGroup(app),
app: app,
name: name,
help: help,
}
return c
}
// Validate sets a validation function to run when parsing.
func (c *CmdClause) Validate(validator CmdClauseValidator) *CmdClause {
c.validator = validator
return c
}
func (c *CmdClause) FullCommand() string {
out := []string{c.name}
for p := c.parent; p != nil; p = p.parent {
out = append([]string{p.name}, out...)
}
return strings.Join(out, " ")
}
// Command adds a new sub-command.
func (c *CmdClause) Command(name, help string) *CmdClause {
cmd := c.addCommand(name, help)
cmd.parent = c
return cmd
}
// Default makes this command the default if commands don't match.
func (c *CmdClause) Default() *CmdClause {
c.isDefault = true
return c
}
func (c *CmdClause) Action(action Action) *CmdClause {
c.addAction(action)
return c
}
func (c *CmdClause) PreAction(action Action) *CmdClause {
c.addPreAction(action)
return c
}
func (c *CmdClause) init() error {
if err := c.flagGroup.init(); err != nil {
return err
}
if c.argGroup.have() && c.cmdGroup.have() {
return fmt.Errorf("can't mix Arg()s with Command()s")
}
if err := c.argGroup.init(); err != nil {
return err
}
if err := c.cmdGroup.init(); err != nil {
return err
}
return nil
}
func (c *CmdClause) Hidden() *CmdClause {
c.hidden = true
return c
}

View File

@ -0,0 +1,121 @@
package main
import (
"encoding/json"
"os/exec"
"strings"
"text/template"
"os"
)
const (
tmpl = `package kingpin
// This file is autogenerated by "go generate .". Do not modify.
{{range .}}
{{if not .NoValueParser}}
// -- {{.Type}} Value
type {{.Type}}Value {{.Type}}
func new{{.|Name}}Value(p *{{.Type}}) *{{.Type}}Value {
return (*{{.Type}}Value)(p)
}
func (f *{{.Type}}Value) Set(s string) error {
v, err := {{.Parser}}
*f = {{.Type}}Value(v)
return err
}
func (f *{{.Type}}Value) Get() interface{} { return {{.Type}}(*f) }
func (f *{{.Type}}Value) String() string { return {{.|Format}} }
// {{.|Name}} parses the next command-line value as {{.Type}}.
func (p *parserMixin) {{.|Name}}() (target *{{.Type}}) {
target = new({{.Type}})
p.{{.|Name}}Var(target)
return
}
func (p *parserMixin) {{.|Name}}Var(target *{{.Type}}) {
p.SetValue(new{{.|Name}}Value(target))
}
{{end}}
// {{.|Plural}} accumulates {{.Type}} values into a slice.
func (p *parserMixin) {{.|Plural}}() (target *[]{{.Type}}) {
target = new([]{{.Type}})
p.{{.|Plural}}Var(target)
return
}
func (p *parserMixin) {{.|Plural}}Var(target *[]{{.Type}}) {
p.SetValue(newAccumulator(target, func(v interface{}) Value { return new{{.|Name}}Value(v.(*{{.Type}})) }))
}
{{end}}
`
)
type Value struct {
Name string `json:"name"`
NoValueParser bool `json:"no_value_parser"`
Type string `json:"type"`
Parser string `json:"parser"`
Format string `json:"format"`
Plural string `json:"plural"`
}
func fatalIfError(err error) {
if err != nil {
panic(err)
}
}
func main() {
r, err := os.Open("values.json")
fatalIfError(err)
defer r.Close()
v := []Value{}
err = json.NewDecoder(r).Decode(&v)
fatalIfError(err)
valueName := func(v *Value) string {
if v.Name != "" {
return v.Name
}
return strings.Title(v.Type)
}
t, err := template.New("genvalues").Funcs(template.FuncMap{
"Lower": strings.ToLower,
"Format": func(v *Value) string {
if v.Format != "" {
return v.Format
}
return "fmt.Sprintf(\"%v\", *f)"
},
"Name": valueName,
"Plural": func(v *Value) string {
if v.Plural != "" {
return v.Plural
}
return valueName(v) + "List"
},
}).Parse(tmpl)
fatalIfError(err)
w, err := os.Create("values_generated.go")
fatalIfError(err)
defer w.Close()
err = t.Execute(w, v)
fatalIfError(err)
err = exec.Command("goimports", "-w", "values_generated.go").Run()
fatalIfError(err)
}

157
vendor/github.com/alecthomas/kingpin/cmd_test.go generated vendored Normal file
View File

@ -0,0 +1,157 @@
package kingpin
import (
"strings"
"github.com/stretchr/testify/assert"
"testing"
)
func parseAndExecute(app *Application, context *ParseContext) (string, error) {
if err := parse(context, app); err != nil {
return "", err
}
return app.execute(context)
}
func TestNestedCommands(t *testing.T) {
app := New("app", "")
sub1 := app.Command("sub1", "")
sub1.Flag("sub1", "")
subsub1 := sub1.Command("sub1sub1", "")
subsub1.Command("sub1sub1end", "")
sub2 := app.Command("sub2", "")
sub2.Flag("sub2", "")
sub2.Command("sub2sub1", "")
context := tokenize([]string{"sub1", "sub1sub1", "sub1sub1end"}, false)
selected, err := parseAndExecute(app, context)
assert.NoError(t, err)
assert.True(t, context.EOL())
assert.Equal(t, "sub1 sub1sub1 sub1sub1end", selected)
}
func TestNestedCommandsWithArgs(t *testing.T) {
app := New("app", "")
cmd := app.Command("a", "").Command("b", "")
a := cmd.Arg("a", "").String()
b := cmd.Arg("b", "").String()
context := tokenize([]string{"a", "b", "c", "d"}, false)
selected, err := parseAndExecute(app, context)
assert.NoError(t, err)
assert.True(t, context.EOL())
assert.Equal(t, "a b", selected)
assert.Equal(t, "c", *a)
assert.Equal(t, "d", *b)
}
func TestNestedCommandsWithFlags(t *testing.T) {
app := New("app", "")
cmd := app.Command("a", "").Command("b", "")
a := cmd.Flag("aaa", "").Short('a').String()
b := cmd.Flag("bbb", "").Short('b').String()
err := app.init()
assert.NoError(t, err)
context := tokenize(strings.Split("a b --aaa x -b x", " "), false)
selected, err := parseAndExecute(app, context)
assert.NoError(t, err)
assert.True(t, context.EOL())
assert.Equal(t, "a b", selected)
assert.Equal(t, "x", *a)
assert.Equal(t, "x", *b)
}
func TestNestedCommandWithMergedFlags(t *testing.T) {
app := New("app", "")
cmd0 := app.Command("a", "")
cmd0f0 := cmd0.Flag("aflag", "").Bool()
// cmd1 := app.Command("b", "")
// cmd1f0 := cmd0.Flag("bflag", "").Bool()
cmd00 := cmd0.Command("aa", "")
cmd00f0 := cmd00.Flag("aaflag", "").Bool()
err := app.init()
assert.NoError(t, err)
context := tokenize(strings.Split("a aa --aflag --aaflag", " "), false)
selected, err := parseAndExecute(app, context)
assert.NoError(t, err)
assert.True(t, *cmd0f0)
assert.True(t, *cmd00f0)
assert.Equal(t, "a aa", selected)
}
func TestNestedCommandWithDuplicateFlagErrors(t *testing.T) {
app := New("app", "")
app.Flag("test", "").Bool()
app.Command("cmd0", "").Flag("test", "").Bool()
err := app.init()
assert.Error(t, err)
}
func TestNestedCommandWithArgAndMergedFlags(t *testing.T) {
app := New("app", "")
cmd0 := app.Command("a", "")
cmd0f0 := cmd0.Flag("aflag", "").Bool()
// cmd1 := app.Command("b", "")
// cmd1f0 := cmd0.Flag("bflag", "").Bool()
cmd00 := cmd0.Command("aa", "")
cmd00a0 := cmd00.Arg("arg", "").String()
cmd00f0 := cmd00.Flag("aaflag", "").Bool()
err := app.init()
assert.NoError(t, err)
context := tokenize(strings.Split("a aa hello --aflag --aaflag", " "), false)
selected, err := parseAndExecute(app, context)
assert.NoError(t, err)
assert.True(t, *cmd0f0)
assert.True(t, *cmd00f0)
assert.Equal(t, "a aa", selected)
assert.Equal(t, "hello", *cmd00a0)
}
func TestDefaultSubcommandEOL(t *testing.T) {
app := New("app", "").Terminate(nil)
c0 := app.Command("c0", "").Default()
c0.Command("c01", "").Default()
c0.Command("c02", "")
cmd, err := app.Parse([]string{"c0"})
assert.NoError(t, err)
assert.Equal(t, "c0 c01", cmd)
}
func TestDefaultSubcommandWithArg(t *testing.T) {
app := New("app", "").Terminate(nil)
c0 := app.Command("c0", "").Default()
c01 := c0.Command("c01", "").Default()
c012 := c01.Command("c012", "").Default()
a0 := c012.Arg("a0", "").String()
c0.Command("c02", "")
cmd, err := app.Parse([]string{"c0", "hello"})
assert.NoError(t, err)
assert.Equal(t, "c0 c01 c012", cmd)
assert.Equal(t, "hello", *a0)
}
func TestDefaultSubcommandWithFlags(t *testing.T) {
app := New("app", "").Terminate(nil)
c0 := app.Command("c0", "").Default()
_ = c0.Flag("f0", "").Int()
c0c1 := c0.Command("c1", "").Default()
c0c1f1 := c0c1.Flag("f1", "").Int()
selected, err := app.Parse([]string{"--f1=2"})
assert.NoError(t, err)
assert.Equal(t, "c0 c1", selected)
assert.Equal(t, 2, *c0c1f1)
_, err = app.Parse([]string{"--f2"})
assert.Error(t, err)
}
func TestMultipleDefaultCommands(t *testing.T) {
app := New("app", "").Terminate(nil)
app.Command("c0", "").Default()
app.Command("c1", "").Default()
_, err := app.Parse([]string{})
assert.Error(t, err)
}

68
vendor/github.com/alecthomas/kingpin/doc.go generated vendored Normal file
View File

@ -0,0 +1,68 @@
// Package kingpin provides command line interfaces like this:
//
// $ chat
// usage: chat [<flags>] <command> [<flags>] [<args> ...]
//
// Flags:
// --debug enable debug mode
// --help Show help.
// --server=127.0.0.1 server address
//
// Commands:
// help <command>
// Show help for a command.
//
// post [<flags>] <channel>
// Post a message to a channel.
//
// register <nick> <name>
// Register a new user.
//
// $ chat help post
// usage: chat [<flags>] post [<flags>] <channel> [<text>]
//
// Post a message to a channel.
//
// Flags:
// --image=IMAGE image to post
//
// Args:
// <channel> channel to post to
// [<text>] text to post
// $ chat post --image=~/Downloads/owls.jpg pics
//
// From code like this:
//
// package main
//
// import "gopkg.in/alecthomas/kingpin.v1"
//
// var (
// debug = kingpin.Flag("debug", "enable debug mode").Default("false").Bool()
// serverIP = kingpin.Flag("server", "server address").Default("127.0.0.1").IP()
//
// register = kingpin.Command("register", "Register a new user.")
// registerNick = register.Arg("nick", "nickname for user").Required().String()
// registerName = register.Arg("name", "name of user").Required().String()
//
// post = kingpin.Command("post", "Post a message to a channel.")
// postImage = post.Flag("image", "image to post").ExistingFile()
// postChannel = post.Arg("channel", "channel to post to").Required().String()
// postText = post.Arg("text", "text to post").String()
// )
//
// func main() {
// switch kingpin.Parse() {
// // Register user
// case "register":
// println(*registerNick)
//
// // Post message
// case "post":
// if *postImage != nil {
// }
// if *postText != "" {
// }
// }
// }
package kingpin

View File

@ -0,0 +1,20 @@
package main
import (
"fmt"
"github.com/alecthomas/kingpin"
)
var (
debug = kingpin.Flag("debug", "Enable debug mode.").Bool()
timeout = kingpin.Flag("timeout", "Timeout waiting for ping.").Default("5s").OverrideDefaultFromEnvar("PING_TIMEOUT").Short('t').Duration()
ip = kingpin.Arg("ip", "IP address to ping.").Required().IP()
count = kingpin.Arg("count", "Number of packets to send").Int()
)
func main() {
kingpin.Version("0.0.1")
kingpin.Parse()
fmt.Printf("Would ping: %s with timeout %s and count %d", *ip, *timeout, *count)
}

View File

@ -0,0 +1,38 @@
package main
import (
"os"
"strings"
"github.com/alecthomas/kingpin"
)
var (
app = kingpin.New("chat", "A command-line chat application.")
debug = app.Flag("debug", "Enable debug mode.").Bool()
serverIP = app.Flag("server", "Server address.").Default("127.0.0.1").IP()
register = app.Command("register", "Register a new user.")
registerNick = register.Arg("nick", "Nickname for user.").Required().String()
registerName = register.Arg("name", "Name of user.").Required().String()
post = app.Command("post", "Post a message to a channel.")
postImage = post.Flag("image", "Image to post.").File()
postChannel = post.Arg("channel", "Channel to post to.").Required().String()
postText = post.Arg("text", "Text to post.").Strings()
)
func main() {
switch kingpin.MustParse(app.Parse(os.Args[1:])) {
// Register user
case register.FullCommand():
println(*registerNick)
// Post message
case post.FullCommand():
if *postImage != nil {
}
text := strings.Join(*postText, " ")
println("Post:", text)
}
}

View File

@ -0,0 +1,105 @@
// A curl-like HTTP command-line client.
package main
import (
"errors"
"fmt"
"io"
"net/http"
"os"
"strings"
"github.com/alecthomas/kingpin"
)
var (
timeout = kingpin.Flag("timeout", "Set connection timeout.").Short('t').Default("5s").Duration()
headers = HTTPHeader(kingpin.Flag("headers", "Add HTTP headers to the request.").Short('H').PlaceHolder("HEADER=VALUE"))
get = kingpin.Command("get", "GET a resource.").Default()
getFlag = get.Flag("test", "Test flag").Bool()
getURL = get.Command("url", "Retrieve a URL.").Default()
getURLURL = getURL.Arg("url", "URL to GET.").Required().URL()
getFile = get.Command("file", "Retrieve a file.")
getFileFile = getFile.Arg("file", "File to retrieve.").Required().ExistingFile()
post = kingpin.Command("post", "POST a resource.")
postData = post.Flag("data", "Key-value data to POST").Short('d').PlaceHolder("KEY:VALUE").StringMap()
postBinaryFile = post.Flag("data-binary", "File with binary data to POST.").File()
postURL = post.Arg("url", "URL to POST to.").Required().URL()
)
type HTTPHeaderValue http.Header
func (h HTTPHeaderValue) Set(value string) error {
parts := strings.SplitN(value, "=", 2)
if len(parts) != 2 {
return fmt.Errorf("expected HEADER=VALUE got '%s'", value)
}
(http.Header)(h).Add(parts[0], parts[1])
return nil
}
func (h HTTPHeaderValue) String() string {
return ""
}
func HTTPHeader(s kingpin.Settings) (target *http.Header) {
target = &http.Header{}
s.SetValue((*HTTPHeaderValue)(target))
return
}
func applyRequest(req *http.Request) error {
req.Header = *headers
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode > 299 {
return fmt.Errorf("HTTP request failed: %s", resp.Status)
}
_, err = io.Copy(os.Stdout, resp.Body)
return err
}
func apply(method string, url string) error {
req, err := http.NewRequest(method, url, nil)
if err != nil {
return err
}
return applyRequest(req)
}
func applyPOST() error {
req, err := http.NewRequest("POST", (*postURL).String(), nil)
if err != nil {
return err
}
if len(*postData) > 0 {
for key, value := range *postData {
req.Form.Set(key, value)
}
} else if postBinaryFile != nil {
if headers.Get("Content-Type") != "" {
headers.Set("Content-Type", "application/octet-stream")
}
req.Body = *postBinaryFile
} else {
return errors.New("--data or --data-binary must be provided to POST")
}
return applyRequest(req)
}
func main() {
kingpin.UsageTemplate(kingpin.CompactUsageTemplate).Version("1.0").Author("Alec Thomas")
kingpin.CommandLine.Help = "An example implementation of curl."
switch kingpin.Parse() {
case "get url":
kingpin.FatalIfError(apply("GET", (*getURLURL).String()), "GET failed")
case "post":
kingpin.FatalIfError(applyPOST(), "POST failed")
}
}

View File

@ -0,0 +1,30 @@
package main
import (
"fmt"
"os"
"github.com/alecthomas/kingpin"
)
// Context for "ls" command
type LsCommand struct {
All bool
}
func (l *LsCommand) run(c *kingpin.ParseContext) error {
fmt.Printf("all=%v\n", l.All)
return nil
}
func configureLsCommand(app *kingpin.Application) {
c := &LsCommand{}
ls := app.Command("ls", "List files.").Action(c.run)
ls.Flag("all", "List all files.").Short('a').BoolVar(&c.All)
}
func main() {
app := kingpin.New("modular", "My modular application.")
configureLsCommand(app)
kingpin.MustParse(app.Parse(os.Args[1:]))
}

View File

@ -0,0 +1,20 @@
package main
import (
"fmt"
"github.com/alecthomas/kingpin"
)
var (
debug = kingpin.Flag("debug", "Enable debug mode.").Bool()
timeout = kingpin.Flag("timeout", "Timeout waiting for ping.").OverrideDefaultFromEnvar("PING_TIMEOUT").Required().Short('t').Duration()
ip = kingpin.Arg("ip", "IP address to ping.").Required().IP()
count = kingpin.Arg("count", "Number of packets to send").Int()
)
func main() {
kingpin.Version("0.0.1")
kingpin.Parse()
fmt.Printf("Would ping: %s with timeout %s and count %d", *ip, *timeout, *count)
}

42
vendor/github.com/alecthomas/kingpin/examples_test.go generated vendored Normal file
View File

@ -0,0 +1,42 @@
package kingpin
import (
"fmt"
"net/http"
"strings"
)
type HTTPHeaderValue http.Header
func (h *HTTPHeaderValue) Set(value string) error {
parts := strings.SplitN(value, ":", 2)
if len(parts) != 2 {
return fmt.Errorf("expected HEADER:VALUE got '%s'", value)
}
(*http.Header)(h).Add(parts[0], parts[1])
return nil
}
func (h *HTTPHeaderValue) String() string {
return ""
}
func HTTPHeader(s Settings) (target *http.Header) {
target = new(http.Header)
s.SetValue((*HTTPHeaderValue)(target))
return
}
// This example illustrates how to define custom parsers. HTTPHeader
// cumulatively parses each encountered --header flag into a http.Header struct.
func ExampleValue() {
var (
curl = New("curl", "transfer a URL")
headers = HTTPHeader(curl.Flag("headers", "Add HTTP headers to the request.").Short('H').PlaceHolder("HEADER:VALUE"))
)
curl.Parse([]string{"-H Content-Type:application/octet-stream"})
for key, value := range *headers {
fmt.Printf("%s = %s\n", key, value)
}
}

237
vendor/github.com/alecthomas/kingpin/flags.go generated vendored Normal file
View File

@ -0,0 +1,237 @@
package kingpin
import (
"fmt"
"os"
"strings"
)
type flagGroup struct {
short map[string]*FlagClause
long map[string]*FlagClause
flagOrder []*FlagClause
}
func newFlagGroup() *flagGroup {
return &flagGroup{
short: make(map[string]*FlagClause),
long: make(map[string]*FlagClause),
}
}
func (f *flagGroup) merge(o *flagGroup) {
for _, flag := range o.flagOrder {
if flag.shorthand != 0 {
f.short[string(flag.shorthand)] = flag
}
f.long[flag.name] = flag
f.flagOrder = append(f.flagOrder, flag)
}
}
// Flag defines a new flag with the given long name and help.
func (f *flagGroup) Flag(name, help string) *FlagClause {
flag := newFlag(name, help)
f.long[name] = flag
f.flagOrder = append(f.flagOrder, flag)
return flag
}
func (f *flagGroup) init() error {
for _, flag := range f.long {
if err := flag.init(); err != nil {
return err
}
if flag.shorthand != 0 {
f.short[string(flag.shorthand)] = flag
}
}
return nil
}
func (f *flagGroup) parse(context *ParseContext) (*FlagClause, error) {
var token *Token
loop:
for {
token = context.Peek()
switch token.Type {
case TokenEOL:
break loop
case TokenLong, TokenShort:
flagToken := token
defaultValue := ""
var flag *FlagClause
var ok bool
invert := false
name := token.Value
if token.Type == TokenLong {
if strings.HasPrefix(name, "no-") {
name = name[3:]
invert = true
}
flag, ok = f.long[name]
if !ok {
return nil, fmt.Errorf("unknown long flag '%s'", flagToken)
}
} else {
flag, ok = f.short[name]
if !ok {
return nil, fmt.Errorf("unknown short flag '%s'", flagToken)
}
}
context.Next()
fb, ok := flag.value.(boolFlag)
if ok && fb.IsBoolFlag() {
if invert {
defaultValue = "false"
} else {
defaultValue = "true"
}
} else {
if invert {
context.Push(token)
return nil, fmt.Errorf("unknown long flag '%s'", flagToken)
}
token = context.Peek()
if token.Type != TokenArg {
context.Push(token)
return nil, fmt.Errorf("expected argument for flag '%s'", flagToken)
}
context.Next()
defaultValue = token.Value
}
context.matchedFlag(flag, defaultValue)
return flag, nil
default:
break loop
}
}
return nil, nil
}
func (f *flagGroup) visibleFlags() int {
count := 0
for _, flag := range f.long {
if !flag.hidden {
count++
}
}
return count
}
// FlagClause is a fluid interface used to build flags.
type FlagClause struct {
parserMixin
actionMixin
name string
shorthand byte
help string
envar string
defaultValue string
placeholder string
hidden bool
}
func newFlag(name, help string) *FlagClause {
f := &FlagClause{
name: name,
help: help,
}
return f
}
func (f *FlagClause) needsValue() bool {
return f.required && f.defaultValue == ""
}
func (f *FlagClause) formatPlaceHolder() string {
if f.placeholder != "" {
return f.placeholder
}
if f.defaultValue != "" {
if _, ok := f.value.(*stringValue); ok {
return fmt.Sprintf("%q", f.defaultValue)
}
return f.defaultValue
}
return strings.ToUpper(f.name)
}
func (f *FlagClause) init() error {
if f.required && f.defaultValue != "" {
return fmt.Errorf("required flag '--%s' with default value that will never be used", f.name)
}
if f.value == nil {
return fmt.Errorf("no type defined for --%s (eg. .String())", f.name)
}
if f.envar != "" {
if v := os.Getenv(f.envar); v != "" {
f.defaultValue = v
}
}
return nil
}
// Dispatch to the given function after the flag is parsed and validated.
func (f *FlagClause) Action(action Action) *FlagClause {
f.addAction(action)
return f
}
func (f *FlagClause) PreAction(action Action) *FlagClause {
f.addPreAction(action)
return f
}
// Default value for this flag. It *must* be parseable by the value of the flag.
func (f *FlagClause) Default(value string) *FlagClause {
f.defaultValue = value
return f
}
// OverrideDefaultFromEnvar overrides the default value for a flag from an
// environment variable, if available.
func (f *FlagClause) OverrideDefaultFromEnvar(envar string) *FlagClause {
f.envar = envar
return f
}
// PlaceHolder sets the place-holder string used for flag values in the help. The
// default behaviour is to use the value provided by Default() if provided,
// then fall back on the capitalized flag name.
func (f *FlagClause) PlaceHolder(placeholder string) *FlagClause {
f.placeholder = placeholder
return f
}
// Hidden hides a flag from usage but still allows it to be used.
func (f *FlagClause) Hidden() *FlagClause {
f.hidden = true
return f
}
// Required makes the flag required. You can not provide a Default() value to a Required() flag.
func (f *FlagClause) Required() *FlagClause {
f.required = true
return f
}
// Short sets the short flag name.
func (f *FlagClause) Short(name byte) *FlagClause {
f.shorthand = name
return f
}
// Bool makes this flag a boolean flag.
func (f *FlagClause) Bool() (target *bool) {
target = new(bool)
f.SetValue(newBoolValue(target))
return
}

109
vendor/github.com/alecthomas/kingpin/flags_test.go generated vendored Normal file
View File

@ -0,0 +1,109 @@
package kingpin
import (
"io/ioutil"
"os"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBool(t *testing.T) {
app := New("test", "")
b := app.Flag("b", "").Bool()
_, err := app.Parse([]string{"--b"})
assert.NoError(t, err)
assert.True(t, *b)
}
func TestNoBool(t *testing.T) {
fg := newFlagGroup()
f := fg.Flag("b", "").Default("true")
b := f.Bool()
fg.init()
tokens := tokenize([]string{"--no-b"}, false)
_, err := fg.parse(tokens)
assert.NoError(t, err)
assert.False(t, *b)
}
func TestNegateNonBool(t *testing.T) {
fg := newFlagGroup()
f := fg.Flag("b", "")
f.Int()
fg.init()
tokens := tokenize([]string{"--no-b"}, false)
_, err := fg.parse(tokens)
assert.Error(t, err)
}
func TestInvalidFlagDefaultCanBeOverridden(t *testing.T) {
app := New("test", "")
app.Flag("a", "").Default("invalid").Bool()
_, err := app.Parse([]string{})
assert.Error(t, err)
}
func TestRequiredFlag(t *testing.T) {
app := New("test", "")
app.Version("0.0.0").Writer(ioutil.Discard)
exits := 0
app.Terminate(func(int) { exits++ })
app.Flag("a", "").Required().Bool()
_, err := app.Parse([]string{"--a"})
assert.NoError(t, err)
_, err = app.Parse([]string{})
assert.Error(t, err)
_, err = app.Parse([]string{"--version"})
assert.Equal(t, 1, exits)
}
func TestShortFlag(t *testing.T) {
app := New("test", "")
f := app.Flag("long", "").Short('s').Bool()
_, err := app.Parse([]string{"-s"})
assert.NoError(t, err)
assert.True(t, *f)
}
func TestCombinedShortFlags(t *testing.T) {
app := New("test", "")
a := app.Flag("short0", "").Short('0').Bool()
b := app.Flag("short1", "").Short('1').Bool()
c := app.Flag("short2", "").Short('2').Bool()
_, err := app.Parse([]string{"-01"})
assert.NoError(t, err)
assert.True(t, *a)
assert.True(t, *b)
assert.False(t, *c)
}
func TestCombinedShortFlagArg(t *testing.T) {
a := New("test", "")
n := a.Flag("short", "").Short('s').Int()
_, err := a.Parse([]string{"-s10"})
assert.NoError(t, err)
assert.Equal(t, 10, *n)
}
func TestEmptyShortFlagIsAnError(t *testing.T) {
_, err := New("test", "").Parse([]string{"-"})
assert.Error(t, err)
}
func TestRequiredWithEnvarMissingErrors(t *testing.T) {
app := New("test", "")
app.Flag("t", "").OverrideDefaultFromEnvar("TEST_ENVAR").Required().Int()
_, err := app.Parse([]string{})
assert.Error(t, err)
}
func TestRequiredWithEnvar(t *testing.T) {
os.Setenv("TEST_ENVAR", "123")
app := New("test", "")
flag := app.Flag("t", "").OverrideDefaultFromEnvar("TEST_ENVAR").Required().Int()
_, err := app.Parse([]string{})
assert.NoError(t, err)
assert.Equal(t, 123, *flag)
}

88
vendor/github.com/alecthomas/kingpin/global.go generated vendored Normal file
View File

@ -0,0 +1,88 @@
package kingpin
import (
"os"
"path/filepath"
)
var (
// CommandLine is the default Kingpin parser.
CommandLine = New(filepath.Base(os.Args[0]), "")
)
// Command adds a new command to the default parser.
func Command(name, help string) *CmdClause {
return CommandLine.Command(name, help)
}
// Flag adds a new flag to the default parser.
func Flag(name, help string) *FlagClause {
return CommandLine.Flag(name, help)
}
// Arg adds a new argument to the top-level of the default parser.
func Arg(name, help string) *ArgClause {
return CommandLine.Arg(name, help)
}
// Parse and return the selected command. Will call the termination handler if
// an error is encountered.
func Parse() string {
selected := MustParse(CommandLine.Parse(os.Args[1:]))
if selected == "" && CommandLine.cmdGroup.have() {
Usage()
CommandLine.terminate(0)
}
return selected
}
// Errorf prints an error message to stderr.
func Errorf(format string, args ...interface{}) {
CommandLine.Errorf(format, args...)
}
// Fatalf prints an error message to stderr and exits.
func Fatalf(format string, args ...interface{}) {
CommandLine.Fatalf(format, args...)
}
// FatalIfError prints an error and exits if err is not nil. The error is printed
// with the given prefix.
func FatalIfError(err error, format string, args ...interface{}) {
CommandLine.FatalIfError(err, format, args...)
}
// FatalUsage prints an error message followed by usage information, then
// exits with a non-zero status.
func FatalUsage(format string, args ...interface{}) {
CommandLine.FatalUsage(format, args...)
}
// FatalUsageContext writes a printf formatted error message to stderr, then
// usage information for the given ParseContext, before exiting.
func FatalUsageContext(context *ParseContext, format string, args ...interface{}) {
CommandLine.FatalUsageContext(context, format, args...)
}
// Usage prints usage to stderr.
func Usage() {
CommandLine.Usage(os.Args[1:])
}
// Set global usage template to use (defaults to DefaultUsageTemplate).
func UsageTemplate(template string) *Application {
return CommandLine.UsageTemplate(template)
}
// MustParse can be used with app.Parse(args) to exit with an error if parsing fails.
func MustParse(command string, err error) string {
if err != nil {
Fatalf("%s, try --help", err)
}
return command
}
// Version adds a flag for displaying the application version number.
func Version(version string) *Application {
return CommandLine.Version(version)
}
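
global.go wires everything to the default CommandLine parser so small programs can skip constructing an Application explicitly. A hedged sketch of that style (the command and flag names are illustrative only):

```
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin"
)

var (
	verbose = kingpin.Flag("verbose", "Enable verbose output.").Bool()
	ping    = kingpin.Command("ping", "Ping a host.")
	pingTTL = ping.Flag("ttl", "Time to live.").Default("64").Int()
)

func main() {
	kingpin.Version("0.0.1")
	// Parse() reads os.Args[1:] and returns the selected command name.
	switch kingpin.Parse() {
	case "ping":
		fmt.Println("ping with ttl", *pingTTL, "verbose:", *verbose)
	}
}
```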

9
vendor/github.com/alecthomas/kingpin/guesswidth.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// +build !linux,!freebsd,!darwin,!dragonfly,!netbsd,!openbsd

package kingpin
import "io"
func guessWidth(w io.Writer) int {
return 80
}

38
vendor/github.com/alecthomas/kingpin/guesswidth_unix.go generated vendored Normal file
View File

@ -0,0 +1,38 @@
// +build linux freebsd darwin dragonfly netbsd openbsd

package kingpin
import (
"io"
"os"
"strconv"
"syscall"
"unsafe"
)
func guessWidth(w io.Writer) int {
// check if COLUMNS env is set to comply with
// http://pubs.opengroup.org/onlinepubs/009604499/basedefs/xbd_chap08.html
colsStr := os.Getenv("COLUMNS")
if colsStr != "" {
if cols, err := strconv.Atoi(colsStr); err == nil {
return cols
}
}
if t, ok := w.(*os.File); ok {
fd := t.Fd()
var dimensions [4]uint16
if _, _, err := syscall.Syscall6(
syscall.SYS_IOCTL,
uintptr(fd),
uintptr(syscall.TIOCGWINSZ),
uintptr(unsafe.Pointer(&dimensions)),
0, 0, 0,
); err == 0 {
return int(dimensions[1])
}
}
return 80
}

219
vendor/github.com/alecthomas/kingpin/model.go generated vendored Normal file
View File

@ -0,0 +1,219 @@
package kingpin
import (
"fmt"
"strconv"
"strings"
)
// Data model for Kingpin command-line structure.
type FlagGroupModel struct {
Flags []*FlagModel
}
func (f *FlagGroupModel) FlagSummary() string {
out := []string{}
count := 0
for _, flag := range f.Flags {
if flag.Name != "help" {
count++
}
if flag.Required {
if flag.IsBoolFlag() {
out = append(out, fmt.Sprintf("--[no-]%s", flag.Name))
} else {
out = append(out, fmt.Sprintf("--%s=%s", flag.Name, flag.FormatPlaceHolder()))
}
}
}
if count != len(out) {
out = append(out, "[<flags>]")
}
return strings.Join(out, " ")
}
type FlagModel struct {
Name string
Help string
Short rune
Default string
Envar string
PlaceHolder string
Required bool
Hidden bool
Value Value
}
func (f *FlagModel) String() string {
return f.Value.String()
}
func (f *FlagModel) IsBoolFlag() bool {
if fl, ok := f.Value.(boolFlag); ok {
return fl.IsBoolFlag()
}
return false
}
func (f *FlagModel) FormatPlaceHolder() string {
if f.PlaceHolder != "" {
return f.PlaceHolder
}
if f.Default != "" {
if _, ok := f.Value.(*stringValue); ok {
return strconv.Quote(f.Default)
}
return f.Default
}
return strings.ToUpper(f.Name)
}
type ArgGroupModel struct {
Args []*ArgModel
}
func (a *ArgGroupModel) ArgSummary() string {
depth := 0
out := []string{}
for _, arg := range a.Args {
h := "<" + arg.Name + ">"
if !arg.Required {
h = "[" + h
depth++
}
out = append(out, h)
}
out[len(out)-1] = out[len(out)-1] + strings.Repeat("]", depth)
return strings.Join(out, " ")
}
type ArgModel struct {
Name string
Help string
Default string
Required bool
Value Value
}
func (a *ArgModel) String() string {
return a.Value.String()
}
type CmdGroupModel struct {
Commands []*CmdModel
}
func (c *CmdGroupModel) FlattenedCommands() (out []*CmdModel) {
for _, cmd := range c.Commands {
if len(cmd.Commands) == 0 {
out = append(out, cmd)
}
out = append(out, cmd.FlattenedCommands()...)
}
return
}
type CmdModel struct {
Name string
Help string
FullCommand string
Depth int
Hidden bool
Default bool
*FlagGroupModel
*ArgGroupModel
*CmdGroupModel
}
func (c *CmdModel) String() string {
return c.FullCommand
}
type ApplicationModel struct {
Name string
Help string
Version string
Author string
*ArgGroupModel
*CmdGroupModel
*FlagGroupModel
}
func (a *Application) Model() *ApplicationModel {
return &ApplicationModel{
Name: a.Name,
Help: a.Help,
Version: a.version,
Author: a.author,
FlagGroupModel: a.flagGroup.Model(),
ArgGroupModel: a.argGroup.Model(),
CmdGroupModel: a.cmdGroup.Model(),
}
}
func (a *argGroup) Model() *ArgGroupModel {
m := &ArgGroupModel{}
for _, arg := range a.args {
m.Args = append(m.Args, arg.Model())
}
return m
}
func (a *ArgClause) Model() *ArgModel {
return &ArgModel{
Name: a.name,
Help: a.help,
Default: a.defaultValue,
Required: a.required,
Value: a.value,
}
}
func (f *flagGroup) Model() *FlagGroupModel {
m := &FlagGroupModel{}
for _, fl := range f.flagOrder {
m.Flags = append(m.Flags, fl.Model())
}
return m
}
func (f *FlagClause) Model() *FlagModel {
return &FlagModel{
Name: f.name,
Help: f.help,
Short: rune(f.shorthand),
Default: f.defaultValue,
Envar: f.envar,
PlaceHolder: f.placeholder,
Required: f.required,
Hidden: f.hidden,
Value: f.value,
}
}
func (c *cmdGroup) Model() *CmdGroupModel {
m := &CmdGroupModel{}
for _, cm := range c.commandOrder {
m.Commands = append(m.Commands, cm.Model())
}
return m
}
func (c *CmdClause) Model() *CmdModel {
depth := 0
for i := c; i != nil; i = i.parent {
depth++
}
return &CmdModel{
Name: c.name,
Help: c.help,
Depth: depth,
Hidden: c.hidden,
Default: c.isDefault,
FullCommand: c.FullCommand(),
FlagGroupModel: c.flagGroup.Model(),
ArgGroupModel: c.argGroup.Model(),
CmdGroupModel: c.cmdGroup.Model(),
}
}
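
The model types above expose a read-only snapshot of the parser structure (flags, arguments, commands) that can be used for custom help or completion output. A small illustrative sketch (not from the vendored code; the application and flags are hypothetical):

```
package main

import (
	"fmt"

	"github.com/alecthomas/kingpin"
)

func main() {
	app := kingpin.New("demo", "Model example.")
	app.Flag("config", "Path to a config file.").Default("demo.conf").String()
	app.Flag("debug", "Enable debug output.").Bool()

	// ApplicationModel embeds *FlagGroupModel, so Flags is promoted.
	for _, f := range app.Model().Flags {
		fmt.Printf("--%s\tdefault=%q\t%s\n", f.Name, f.Default, f.Help)
	}
}
```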

375
vendor/github.com/alecthomas/kingpin/parser.go generated vendored Normal file
View File

@ -0,0 +1,375 @@
package kingpin
import (
"bufio"
"fmt"
"os"
"strings"
)
type TokenType int
// Token types.
const (
TokenShort TokenType = iota
TokenLong
TokenArg
TokenError
TokenEOL
)
func (t TokenType) String() string {
switch t {
case TokenShort:
return "short flag"
case TokenLong:
return "long flag"
case TokenArg:
return "argument"
case TokenError:
return "error"
case TokenEOL:
return "<EOL>"
}
return "?"
}
var (
TokenEOLMarker = Token{-1, TokenEOL, ""}
)
type Token struct {
Index int
Type TokenType
Value string
}
func (t *Token) Equal(o *Token) bool {
return t.Index == o.Index
}
func (t *Token) IsFlag() bool {
return t.Type == TokenShort || t.Type == TokenLong
}
func (t *Token) IsEOF() bool {
return t.Type == TokenEOL
}
func (t *Token) String() string {
switch t.Type {
case TokenShort:
return "-" + t.Value
case TokenLong:
return "--" + t.Value
case TokenArg:
return t.Value
case TokenError:
return "error: " + t.Value
case TokenEOL:
return "<EOL>"
default:
panic("unhandled type")
}
}
// A union of possible elements in a parse stack.
type ParseElement struct {
// Clause is either *CmdClause, *ArgClause or *FlagClause.
Clause interface{}
// Value is corresponding value for an ArgClause or FlagClause (if any).
Value *string
}
// ParseContext holds the current context of the parser. When passed to
// Action() callbacks Elements will be fully populated with *FlagClause,
// *ArgClause and *CmdClause values and their corresponding arguments (if
// any).
type ParseContext struct {
SelectedCommand *CmdClause
ignoreDefault bool
argsOnly bool
peek []*Token
argi int // Index of current command-line arg we're processing.
args []string
flags *flagGroup
arguments *argGroup
argumenti int // Cursor into arguments
// Flags, arguments and commands encountered and collected during parse.
Elements []*ParseElement
}
func (p *ParseContext) nextArg() *ArgClause {
if p.argumenti >= len(p.arguments.args) {
return nil
}
arg := p.arguments.args[p.argumenti]
if !arg.consumesRemainder() {
p.argumenti++
}
return arg
}
func (p *ParseContext) next() {
p.argi++
p.args = p.args[1:]
}
// HasTrailingArgs returns true if there are unparsed command-line arguments.
// This can occur if the parser can not match remaining arguments.
func (p *ParseContext) HasTrailingArgs() bool {
return len(p.args) > 0
}
func tokenize(args []string, ignoreDefault bool) *ParseContext {
return &ParseContext{
ignoreDefault: ignoreDefault,
args: args,
flags: newFlagGroup(),
arguments: newArgGroup(),
}
}
func (p *ParseContext) mergeFlags(flags *flagGroup) {
for _, flag := range flags.flagOrder {
if flag.shorthand != 0 {
p.flags.short[string(flag.shorthand)] = flag
}
p.flags.long[flag.name] = flag
p.flags.flagOrder = append(p.flags.flagOrder, flag)
}
}
func (p *ParseContext) mergeArgs(args *argGroup) {
for _, arg := range args.args {
p.arguments.args = append(p.arguments.args, arg)
}
}
func (p *ParseContext) EOL() bool {
return p.Peek().Type == TokenEOL
}
// Next token in the parse context.
func (p *ParseContext) Next() *Token {
if len(p.peek) > 0 {
return p.pop()
}
// End of tokens.
if len(p.args) == 0 {
return &Token{Index: p.argi, Type: TokenEOL}
}
arg := p.args[0]
p.next()
if p.argsOnly {
return &Token{p.argi, TokenArg, arg}
}
// All remaining args are passed directly.
if arg == "--" {
p.argsOnly = true
return p.Next()
}
if strings.HasPrefix(arg, "--") {
parts := strings.SplitN(arg[2:], "=", 2)
token := &Token{p.argi, TokenLong, parts[0]}
if len(parts) == 2 {
p.Push(&Token{p.argi, TokenArg, parts[1]})
}
return token
}
if strings.HasPrefix(arg, "-") {
if len(arg) == 1 {
return &Token{Index: p.argi, Type: TokenShort}
}
short := arg[1:2]
flag, ok := p.flags.short[short]
// Not a known short flag, we'll just return it anyway.
if !ok {
} else if fb, ok := flag.value.(boolFlag); ok && fb.IsBoolFlag() {
// Bool short flag.
} else {
// Short flag with combined argument: -fARG
token := &Token{p.argi, TokenShort, short}
if len(arg) > 2 {
p.Push(&Token{p.argi, TokenArg, arg[2:]})
}
return token
}
if len(arg) > 2 {
p.args = append([]string{"-" + arg[2:]}, p.args...)
}
return &Token{p.argi, TokenShort, short}
} else if strings.HasPrefix(arg, "@") {
expanded, err := ExpandArgsFromFile(arg[1:])
if err != nil {
return &Token{p.argi, TokenError, err.Error()}
}
if p.argi >= len(p.args) {
p.args = append(p.args[:p.argi-1], expanded...)
} else {
p.args = append(p.args[:p.argi-1], append(expanded, p.args[p.argi+1:]...)...)
}
return p.Next()
}
return &Token{p.argi, TokenArg, arg}
}
func (p *ParseContext) Peek() *Token {
if len(p.peek) == 0 {
return p.Push(p.Next())
}
return p.peek[len(p.peek)-1]
}
func (p *ParseContext) Push(token *Token) *Token {
p.peek = append(p.peek, token)
return token
}
func (p *ParseContext) pop() *Token {
end := len(p.peek) - 1
token := p.peek[end]
p.peek = p.peek[0:end]
return token
}
func (p *ParseContext) String() string {
return p.SelectedCommand.FullCommand()
}
func (p *ParseContext) matchedFlag(flag *FlagClause, value string) {
p.Elements = append(p.Elements, &ParseElement{Clause: flag, Value: &value})
}
func (p *ParseContext) matchedArg(arg *ArgClause, value string) {
p.Elements = append(p.Elements, &ParseElement{Clause: arg, Value: &value})
}
func (p *ParseContext) matchedCmd(cmd *CmdClause) {
p.Elements = append(p.Elements, &ParseElement{Clause: cmd})
p.mergeFlags(cmd.flagGroup)
p.mergeArgs(cmd.argGroup)
p.SelectedCommand = cmd
}
// Expand arguments from a file. Lines starting with # will be treated as comments.
func ExpandArgsFromFile(filename string) (out []string, err error) {
r, err := os.Open(filename)
if err != nil {
return nil, err
}
defer r.Close()
scanner := bufio.NewScanner(r)
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, "#") {
continue
}
out = append(out, line)
}
err = scanner.Err()
return
}
func parse(context *ParseContext, app *Application) (err error) {
context.mergeFlags(app.flagGroup)
context.mergeArgs(app.argGroup)
cmds := app.cmdGroup
ignoreDefault := context.ignoreDefault
loop:
for !context.EOL() {
token := context.Peek()
switch token.Type {
case TokenLong, TokenShort:
if flag, err := context.flags.parse(context); err != nil {
if !ignoreDefault {
if cmd := cmds.defaultSubcommand(); cmd != nil {
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
break
}
}
return err
} else if flag == HelpFlag {
ignoreDefault = true
}
case TokenArg:
if cmds.have() {
selectedDefault := false
cmd, ok := cmds.commands[token.String()]
if !ok {
if !ignoreDefault {
if cmd = cmds.defaultSubcommand(); cmd != nil {
fmt.Println("defaulted")
selectedDefault = true
}
}
if cmd == nil {
return fmt.Errorf("expected command but got %q", token)
}
}
if cmd == HelpCommand {
ignoreDefault = true
}
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
if !selectedDefault {
context.Next()
}
} else if context.arguments.have() {
if app.noInterspersed {
// no more flags
context.argsOnly = true
}
arg := context.nextArg()
if arg == nil {
break loop
}
context.matchedArg(arg, token.String())
context.Next()
} else {
break loop
}
case TokenEOL:
break loop
}
}
// Move to innermost default command.
for !ignoreDefault {
if cmd := cmds.defaultSubcommand(); cmd != nil {
context.matchedCmd(cmd)
cmds = cmd.cmdGroup
} else {
break
}
}
if !context.EOL() {
return fmt.Errorf("unexpected %s", context.Peek())
}
// Set defaults for all remaining args.
for arg := context.nextArg(); arg != nil && !arg.consumesRemainder(); arg = context.nextArg() {
if arg.defaultValue != "" {
if err := arg.value.Set(arg.defaultValue); err != nil {
return fmt.Errorf("invalid default value '%s' for argument '%s'", arg.defaultValue, arg.name)
}
}
}
return
}
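
ExpandArgsFromFile() above reads one argument per line, skipping lines that start with #, and Next() splices the result into the argument stream whenever a token starts with @. A hedged sketch of calling it directly (the file contents are made up; compare the @-file test in parser_test.go below):

```
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/alecthomas/kingpin"
)

func main() {
	f, err := ioutil.TempFile("", "args")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	// One argument per line; lines starting with # are ignored.
	f.WriteString("# extra arguments\n--verbose\ninput.txt\n")
	f.Close()

	args, err := kingpin.ExpandArgsFromFile(f.Name())
	if err != nil {
		panic(err)
	}
	fmt.Println(args) // [--verbose input.txt]
}
```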

42
vendor/github.com/alecthomas/kingpin/parser_test.go generated vendored Normal file
View File

@ -0,0 +1,42 @@
package kingpin
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestParserExpandFromFile(t *testing.T) {
f, err := ioutil.TempFile("", "")
assert.NoError(t, err)
defer os.Remove(f.Name())
f.WriteString("hello\nworld\n")
f.Close()
app := New("test", "")
arg0 := app.Arg("arg0", "").String()
arg1 := app.Arg("arg1", "").String()
_, err = app.Parse([]string{"@" + f.Name()})
assert.NoError(t, err)
assert.Equal(t, "hello", *arg0)
assert.Equal(t, "world", *arg1)
}
func TestParseContextPush(t *testing.T) {
app := New("test", "")
app.Command("foo", "").Command("bar", "")
c := tokenize([]string{"foo", "bar"}, false)
a := c.Next()
assert.Equal(t, TokenArg, a.Type)
b := c.Next()
assert.Equal(t, TokenArg, b.Type)
c.Push(b)
c.Push(a)
a = c.Next()
assert.Equal(t, "foo", a.Value)
b = c.Next()
assert.Equal(t, "bar", b.Value)
}

Some files were not shown because too many files have changed in this diff.