mirror of
https://github.com/GNS3/gns3-server
synced 2024-11-24 09:18:08 +00:00
Merge 2.1 into 2.2 branch
This commit is contained in:
parent
2a5f3221b9
commit
658aa4bae9
@ -5,7 +5,6 @@ FROM ubuntu:16.04
|
|||||||
ENV DEBIAN_FRONTEND noninteractive
|
ENV DEBIAN_FRONTEND noninteractive
|
||||||
|
|
||||||
# Set the locale
|
# Set the locale
|
||||||
RUN locale-gen en_US.UTF-8
|
|
||||||
ENV LANG en_US.UTF-8
|
ENV LANG en_US.UTF-8
|
||||||
ENV LANGUAGE en_US:en
|
ENV LANGUAGE en_US:en
|
||||||
ENV LC_ALL en_US.UTF-8
|
ENV LC_ALL en_US.UTF-8
|
||||||
@ -13,6 +12,7 @@ ENV LC_ALL en_US.UTF-8
|
|||||||
RUN apt-get update && apt-get install -y software-properties-common
|
RUN apt-get update && apt-get install -y software-properties-common
|
||||||
RUN add-apt-repository ppa:gns3/ppa
|
RUN add-apt-repository ppa:gns3/ppa
|
||||||
RUN apt-get update && apt-get install -y \
|
RUN apt-get update && apt-get install -y \
|
||||||
|
locales \
|
||||||
python3-pip \
|
python3-pip \
|
||||||
python3-dev \
|
python3-dev \
|
||||||
qemu-system-x86 \
|
qemu-system-x86 \
|
||||||
@ -21,6 +21,8 @@ RUN apt-get update && apt-get install -y \
|
|||||||
libvirt-bin \
|
libvirt-bin \
|
||||||
x11vnc
|
x11vnc
|
||||||
|
|
||||||
|
RUN locale-gen en_US.UTF-8
|
||||||
|
|
||||||
# Install uninstall to install dependencies
|
# Install uninstall to install dependencies
|
||||||
RUN apt-get install -y vpcs ubridge
|
RUN apt-get install -y vpcs ubridge
|
||||||
|
|
||||||
|
102
docs/curl.rst
@ -1,30 +1,34 @@
|
|||||||
Sample session using curl
|
Sample sessions using curl
|
||||||
=========================
|
==========================
|
||||||
|
|
||||||
You need to read the :doc:`glossary`, and :doc:`general` before.
|
Read the :doc:`glossary` and :doc:`general` pages first.
|
||||||
|
|
||||||
Full endpoints list is available: :doc:`endpoints`
|
A list of all endpoints is available in :doc:`endpoints`
|
||||||
|
|
||||||
.. warning::
|
.. warning::
|
||||||
|
|
||||||
Beware the output of this sample is truncated in order
|
Note that the output of the samples may be truncated in
|
||||||
to simplify the understanding. Please read the
|
order to keep them readable. Please read the
|
||||||
documentation for the exact output.
|
documentation for the exact output meaning.
|
||||||
|
|
||||||
You can check the server version with a simple curl command:
|
Server version
|
||||||
|
###############
|
||||||
|
|
||||||
|
Check the server version with a simple curl command:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
# curl "http://localhost:3080/v2/version"
|
# curl "http://localhost:3080/v2/version"
|
||||||
{
|
{
|
||||||
"version": "2.0.0dev1"
|
"local": false,
|
||||||
|
"version": "2.1.4"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
List computes
|
List computes
|
||||||
##############
|
##############
|
||||||
|
|
||||||
We will list the computes node where we can run our nodes:
|
List all the compute servers:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -34,20 +38,20 @@ We will list the computes node where we can run our nodes:
|
|||||||
"compute_id": "local",
|
"compute_id": "local",
|
||||||
"connected": true,
|
"connected": true,
|
||||||
"host": "127.0.0.1",
|
"host": "127.0.0.1",
|
||||||
"name": "Local",
|
"name": "local",
|
||||||
"port": 3080,
|
"port": 3080,
|
||||||
"protocol": "http",
|
"protocol": "http",
|
||||||
"user": "admin"
|
"user": "admin"
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
|
|
||||||
In this sample we have only one compute where we can run our nodes. This compute as a special id: local. This
|
In this example there is only one compute server on which nodes can run.
|
||||||
mean it's the local server embed in the GNS3 controller.
|
This compute has a special id: local, which identifies the local server embedded in the GNS3 controller.
|
||||||
|
|
||||||
Create project
|
Create a project
|
||||||
###############
|
#################
|
||||||
|
|
||||||
The next step is to create a project.
|
The next step is to create a project:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -60,7 +64,7 @@ The next step is to create a project.
|
|||||||
Create nodes
|
Create nodes
|
||||||
#############
|
#############
|
||||||
|
|
||||||
With this project id we can now create two VPCS Node.
|
Using the project id, it is now possible to create two VPCS nodes:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -87,15 +91,14 @@ With this project id we can now create two VPCS Node.
|
|||||||
"node_id": "83892a4d-aea0-4350-8b3e-d0af3713da74",
|
"node_id": "83892a4d-aea0-4350-8b3e-d0af3713da74",
|
||||||
"node_type": "vpcs",
|
"node_type": "vpcs",
|
||||||
"project_id": "b8c070f7-f34c-4b7b-ba6f-be3d26ed073f",
|
"project_id": "b8c070f7-f34c-4b7b-ba6f-be3d26ed073f",
|
||||||
|
"properties": {},
|
||||||
"status": "stopped"
|
"status": "stopped"
|
||||||
}
|
}
|
||||||
|
|
||||||
The properties dictionnary contains all setting specific to a node type (dynamips, docker, vpcs...)
|
|
||||||
|
|
||||||
Link nodes
|
Link nodes
|
||||||
###########
|
###########
|
||||||
|
|
||||||
Now we need to link the two VPCS by connecting their port 0 together.
|
The two VPCS nodes can be linked together using their port number 0 (VPCS has only one network adapter with one port):
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
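Linking the two nodes sends a JSON body that pairs each node id with an adapter and port number. A minimal shell sketch of building that body (the second node id is a placeholder, and the exact payload shape should be checked against the v2 link endpoint documentation):

```shell
# Sketch: build the JSON body for POST /v2/projects/<project_id>/links.
# NODE_B is a placeholder id, not a real node. The payload shape assumed
# here is a "nodes" list with node_id, adapter_number and port_number
# for each side of the link.
NODE_A="83892a4d-aea0-4350-8b3e-d0af3713da74"
NODE_B="00000000-0000-0000-0000-000000000000"   # hypothetical second node id
PAYLOAD=$(printf '{"nodes": [{"node_id": "%s", "adapter_number": 0, "port_number": 0}, {"node_id": "%s", "adapter_number": 0, "port_number": 0}]}' "$NODE_A" "$NODE_B")
echo "$PAYLOAD"
```

The resulting string can then be passed to curl with `-d "$PAYLOAD"`.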
@ -123,7 +126,7 @@ Now we need to link the two VPCS by connecting their port 0 together.
|
|||||||
Start nodes
|
Start nodes
|
||||||
###########
|
###########
|
||||||
|
|
||||||
Now we can start the two nodes.
|
Start the two nodes:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -133,8 +136,8 @@ Now we can start the two nodes.
|
|||||||
Connect to nodes
|
Connect to nodes
|
||||||
#################
|
#################
|
||||||
|
|
||||||
Everything should be started now. You can connect via telnet to the different Node.
|
Use a Telnet client to connect to the nodes once they have been started.
|
||||||
The port is the field console in the create Node request.
|
The port number can be found in the output when the nodes have been created above.
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -200,7 +203,7 @@ The port is the field console in the create Node request.
|
|||||||
Stop nodes
|
Stop nodes
|
||||||
##########
|
##########
|
||||||
|
|
||||||
And we stop the two nodes.
|
Stop the two nodes:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -208,40 +211,41 @@ And we stop the two nodes.
|
|||||||
# curl -X POST "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/nodes/83892a4d-aea0-4350-8b3e-d0af3713da74/stop" -d "{}"
|
# curl -X POST "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/nodes/83892a4d-aea0-4350-8b3e-d0af3713da74/stop" -d "{}"
|
||||||
|
|
||||||
|
|
||||||
Add a visual element
|
Add visual elements
|
||||||
######################
|
####################
|
||||||
|
|
||||||
When you want add visual elements to the topology like rectangle, circle, images you can just send a raw SVG.
|
Visual elements like rectangles, ellipses or images can be added to a project in the form of raw SVG.
|
||||||
This will display a red square in the middle of your topologies:
|
|
||||||
|
|
||||||
|
This will display a red square in the middle of your canvas:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
# curl -X POST "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/drawings" -d '{"x":0, "y": 12, "svg": "<svg width=\"50\" height=\"50\"><rect width=\"50\" height=\"50\" style=\"fill: #ff0000\"></rect></svg>"}'
|
# curl -X POST "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/drawings" -d '{"x":0, "y": 12, "svg": "<svg width=\"50\" height=\"50\"><rect width=\"50\" height=\"50\" style=\"fill: #ff0000\"></rect></svg>"}'
|
||||||
|
|
||||||
Tips: you can embed png/jpg... by using a base64 encoding in the SVG.
|
Tip: embed PNG, JPEG etc. images using base64 encoding in the SVG.
|
||||||
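The base64 tip above can be sketched in the shell. The input bytes here are stand-in data, not a real PNG:

```shell
# Sketch of the tip above: embed image data in an SVG via a data URI.
# The bytes being encoded are stand-in data, not a real PNG file.
B64=$(printf 'not-a-real-png' | base64)
SVG="<svg width=\"100\" height=\"100\"><image href=\"data:image/png;base64,${B64}\" width=\"100\" height=\"100\"/></svg>"
echo "$SVG"
```

For a real image, replace the `printf` with `base64 < image.png`.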
|
|
||||||
|
|
||||||
Add filter to the link
|
Add a packet filter
|
||||||
######################
|
####################
|
||||||
|
|
||||||
Filter allow you to add error on a link.
|
Packet filters allow packets to be filtered on a given link. The following drops one packet out of every 5:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
curl -X PUT "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/links/007f2177-6790-4e1b-ac28-41fa226b2a06" -d '{"filters": {"frequency_drop": [5]}}'
|
curl -X PUT "http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/links/007f2177-6790-4e1b-ac28-41fa226b2a06" -d '{"filters": {"frequency_drop": [5]}}'
|
||||||
|
|
||||||
|
|
||||||
Creation of nodes
|
Node creation
|
||||||
#################
|
##############
|
||||||
|
|
||||||
Their is two way of adding nodes. Manual by passing all the information require for a Node.
|
There are two ways to add nodes.
|
||||||
|
|
||||||
Or by using an appliance. The appliance is a node model saved in your server.
|
1. Manually by passing all the information required to create a new node.
|
||||||
|
2. Using an appliance template stored on your server.
|
||||||
|
|
||||||
Using an appliance
|
Using an appliance template
|
||||||
------------------
|
---------------------------
|
||||||
|
|
||||||
First you need to list the available appliances
|
List all the available appliance templates:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -268,15 +272,15 @@ First you need to list the available appliances
|
|||||||
}
|
}
|
||||||
]
|
]
|
||||||
|
|
||||||
Now you can use the appliance and put it at a specific position
|
Use the appliance template and add coordinates to select where the node will be put on the canvas:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
# curl -X POST http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f -d '{"x": 12, "y": 42}'
|
# curl -X POST http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/appliances/9cd59d5a-c70f-4454-8313-6a9e81a8278f -d '{"x": 12, "y": 42}'
|
||||||
|
|
||||||
|
|
||||||
Manual creation of a Qemu node
|
Manual creation of a Qemu node
|
||||||
-------------------------------
|
------------------------------
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -360,7 +364,7 @@ Manual creation of a Qemu node
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
Manual creation of a dynamips node
|
Manual creation of a Dynamips node
|
||||||
-----------------------------------
|
-----------------------------------
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
@ -486,7 +490,7 @@ Manual creation of a dynamips node
|
|||||||
Notifications
|
Notifications
|
||||||
#############
|
#############
|
||||||
|
|
||||||
You can see notification about the changes via the notification feed:
|
Notifications can be seen by connecting to the notification feed:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -494,14 +498,14 @@ You can see notification about the changes via the notification feed:
|
|||||||
{"action": "ping", "event": {"compute_id": "local", "cpu_usage_percent": 35.7, "memory_usage_percent": 80.7}}
|
{"action": "ping", "event": {"compute_id": "local", "cpu_usage_percent": 35.7, "memory_usage_percent": 80.7}}
|
||||||
{"action": "node.updated", "event": {"command_line": "/usr/local/bin/vpcs -p 5001 -m 1 -i 1 -F -R -s 10001 -c 10000 -t 127.0.0.1", "compute_id": "local", "console": 5001, "console_host": "127.0.0.1", "console_type": "telnet", "name": "VPCS 2", "node_id": "83892a4d-aea0-4350-8b3e-d0af3713da74", "node_type": "vpcs", "project_id": "b8c070f7-f34c-4b7b-ba6f-be3d26ed073f", "properties": {"startup_script": null, "startup_script_path": null}, "status": "started"}}
|
{"action": "node.updated", "event": {"command_line": "/usr/local/bin/vpcs -p 5001 -m 1 -i 1 -F -R -s 10001 -c 10000 -t 127.0.0.1", "compute_id": "local", "console": 5001, "console_host": "127.0.0.1", "console_type": "telnet", "name": "VPCS 2", "node_id": "83892a4d-aea0-4350-8b3e-d0af3713da74", "node_type": "vpcs", "project_id": "b8c070f7-f34c-4b7b-ba6f-be3d26ed073f", "properties": {"startup_script": null, "startup_script_path": null}, "status": "started"}}
|
||||||
|
|
||||||
A websocket version is also available on http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/notifications/ws
|
A Websocket notification stream is also available on http://localhost:3080/v2/projects/b8c070f7-f34c-4b7b-ba6f-be3d26ed073f/notifications/ws
|
||||||
|
|
||||||
Read :doc:`notifications` for more informations
|
Read :doc:`notifications` for more information.
|
||||||
|
|
||||||
|
|
||||||
How to found the endpoints?
|
Where to find the endpoints?
|
||||||
###########################
|
############################
|
||||||
|
|
||||||
Full endpoints list is available: :doc:`endpoints`
|
A list of all endpoints is available in :doc:`endpoints`.
|
||||||
|
|
||||||
If you start the server with **--debug** you can see all the requests made by the client and by the controller to the computes nodes.
|
Tip: requests made by a client and by the controller to the compute nodes can be seen if the server is started with the **--debug** parameter.
|
||||||
|
@ -4,27 +4,25 @@ Development
|
|||||||
Code convention
|
Code convention
|
||||||
===============
|
===============
|
||||||
|
|
||||||
You should respect all the PEP8 convention except the
|
Respect all the PEP 8 conventions except the max line length rule.
|
||||||
rule about max line length.
|
|
||||||
|
|
||||||
Source code
|
Source code
|
||||||
===========
|
===========
|
||||||
|
|
||||||
Source code is available on github under GPL V3 licence:
|
Source code is available on GitHub under the GPL v3 licence:
|
||||||
https://github.com/GNS3/
|
https://github.com/GNS3/
|
||||||
|
|
||||||
The GNS3 server: https://github.com/GNS3/gns3-server
|
The GNS3 server: https://github.com/GNS3/gns3-server
|
||||||
The Qt GUI: https://github.com/GNS3/gns3-gui
|
The GNS3 user interface: https://github.com/GNS3/gns3-gui
|
||||||
|
|
||||||
|
|
||||||
Documentation
|
Documentation
|
||||||
==============
|
==============
|
||||||
|
|
||||||
In the gns3-server project.
|
The documentation can be found in the gns3-server project.
|
||||||
|
|
||||||
Build doc
|
Build doc
|
||||||
----------
|
----------
|
||||||
In the project root folder:
|
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
@ -41,4 +39,3 @@ Run tests
|
|||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
py.test -v
|
py.test -v
|
||||||
|
|
||||||
|
@ -1,21 +1,22 @@
|
|||||||
Endpoints
|
Endpoints
|
||||||
------------
|
------------
|
||||||
|
|
||||||
GNS3 expose two type of endpoints:
|
GNS3 exposes two types of endpoints:
|
||||||
|
|
||||||
* Controller
|
* Controller endpoints
|
||||||
* Compute
|
* Compute endpoints
|
||||||
|
|
||||||
Controller API Endpoints
|
Controller endpoints
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
The controller manage all the running topologies. The controller
|
The controller manages everything; it is the central decision point
|
||||||
has knowledge of everything on in GNS3. If you want to create and
|
and has a complete view of your network topologies, what nodes run on
|
||||||
manage a topology it's here. The controller will call the compute API
|
which compute server, the links between them etc.
|
||||||
when needed.
|
|
||||||
|
|
||||||
In a standard GNS3 installation you have one controller and one or many
|
This is the high level API which can be used by users to manually control
|
||||||
computes.
|
the GNS3 backend. The controller will call the compute endpoints when needed.
|
||||||
|
|
||||||
|
A standard GNS3 setup is to have one controller and one or many computes.
|
||||||
|
|
||||||
.. toctree::
|
.. toctree::
|
||||||
:glob:
|
:glob:
|
||||||
@ -24,14 +25,15 @@ computes.
|
|||||||
api/v2/controller/*
|
api/v2/controller/*
|
||||||
|
|
||||||
|
|
||||||
Compute API Endpoints
|
Compute Endpoints
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
The compute is the GNS3 process running on a server and controlling
|
A compute is the GNS3 process running on a host. It controls emulators in order to run nodes
|
||||||
the VM process.
|
(e.g. VMware VMs with VMware Workstation, IOS routers with Dynamips etc.)
|
||||||
|
|
||||||
.. WARNING::
|
.. WARNING::
|
||||||
Consider this endpoints as a private API used by the controller.
|
These endpoints should be considered low level and private.
|
||||||
|
They should only be used by the controller or for debugging purposes.
|
||||||
|
|
||||||
.. toctree::
|
.. toctree::
|
||||||
:glob:
|
:glob:
|
||||||
|
@ -1,11 +1,11 @@
|
|||||||
GNS3 file formats
|
The GNS3 files
|
||||||
=================
|
===============
|
||||||
|
|
||||||
The .gns3
|
.gns3 files
|
||||||
##########
|
############
|
||||||
|
|
||||||
It's the topology file of GNS3 this file is a JSON with all
|
GNS3 project files are in JSON format and contain all
|
||||||
the informations about what is inside the topology.
|
the information necessary to save a project.
|
||||||
|
|
||||||
A minimal version:
|
A minimal version:
|
||||||
|
|
||||||
@ -30,34 +30,34 @@ The revision is the version of file format:
|
|||||||
* 4: GNS3 1.5
|
* 4: GNS3 1.5
|
||||||
* 3: GNS3 1.4
|
* 3: GNS3 1.4
|
||||||
* 2: GNS3 1.3
|
* 2: GNS3 1.3
|
||||||
* 1: GNS3 1.0, 1.1, 1.2 (Not mentionned in the topology file)
|
* 1: GNS3 1.0, 1.1, 1.2 (Not mentioned in the file)
|
||||||
|
|
||||||
And the full JSON schema:
|
The full JSON schema can be found below:
|
||||||
|
|
||||||
.. literalinclude:: gns3_file.json
|
.. literalinclude:: gns3_file.json
|
||||||
|
|
||||||
|
|
||||||
The .net
|
.net files
|
||||||
#########
|
###########
|
||||||
It's topologies made for GNS3 0.8
|
|
||||||
|
Topology files made for GNS3 <= version 1.0. Not supported.
|
||||||
|
|
||||||
|
|
||||||
The .gns3p or .gns3project
|
.gns3p or .gns3project files
|
||||||
###########################
|
|
||||||
|
|
||||||
It's a zipped version of the .gns3 and all files require for
|
|
||||||
a topology. The images could be included inside but are optionnals.
|
|
||||||
|
|
||||||
The zip could be a ZIP64 if the project is too big for standard
|
|
||||||
zip file.
|
|
||||||
|
|
||||||
The .gns3a or .gns3appliance
|
|
||||||
#############################
|
#############################
|
||||||
|
|
||||||
This file contains details on how to import an appliance in GNS3.
|
This is a zipped version of a .gns3 file and includes all the files required to easily share a project.
|
||||||
|
The binary images can optionally be included.
|
||||||
|
|
||||||
A JSON schema is available here:
|
The zip can be a ZIP64 if the project is too big for a standard zip file.
|
||||||
|
|
||||||
|
.gns3a or .gns3appliance files
|
||||||
|
##############################
|
||||||
|
|
||||||
|
These files contain everything needed to create a new appliance template in GNS3.
|
||||||
|
|
||||||
|
A JSON schema is available here:
|
||||||
https://github.com/GNS3/gns3-registry/blob/master/schemas/appliance.json
|
https://github.com/GNS3/gns3-registry/blob/master/schemas/appliance.json
|
||||||
|
|
||||||
And samples here:
|
And samples here:
|
||||||
https://github.com/GNS3/gns3-registry/tree/master/appliances
|
https://github.com/GNS3/gns3-registry/tree/master/appliances
|
||||||
|
@ -1,29 +1,27 @@
|
|||||||
General
|
General
|
||||||
################
|
#######
|
||||||
|
|
||||||
Architecture
|
Architecture
|
||||||
============
|
============
|
||||||
|
|
||||||
GNS3 is splitted in four part:
|
GNS3 can be divided into four parts:
|
||||||
|
|
||||||
* the GUI (project gns3-gui, gns3-web)
|
* the user interface or GUI (gns3-gui or gns3-web projects)
|
||||||
* the controller (project gns3-server)
|
* the controller (gns3-server project)
|
||||||
* the compute (project gns3-server)
|
* the compute (part of the gns3-server project)
|
||||||
* the emulators (qemu, iou, dynamips...)
|
* the emulators (Qemu, Dynamips, VirtualBox...)
|
||||||
|
|
||||||
|
|
||||||
The controller pilot everything it's the part that manage the state
|
The controller pilots everything; it manages the state
|
||||||
of a project, save it on disk. Only one controller exists.
|
of each project. Only one controller should run.
|
||||||
|
|
||||||
|
The GUI displays the topology of a project on a canvas and allows users to
|
||||||
|
perform actions on a given project, sending API requests to the controller.
|
||||||
|
|
||||||
The GUI display the topology. The GUI has only direct contact with
|
The compute controls emulators to run nodes. A compute that is on
|
||||||
the controller.
|
the same server as the controller runs in the same process.
|
||||||
|
|
||||||
The compute are where emulator are executed. If the compute is on
|
The compute usually starts an emulator instance for each node.
|
||||||
the same server as the controller, they are in the same process.
|
|
||||||
|
|
||||||
|
|
||||||
For each node of the topology will start an emulator instance.
|
|
||||||
|
|
||||||
|
|
||||||
A small schema::
|
A small schema::
|
||||||
@ -42,19 +40,18 @@ A small schema::
|
|||||||
+--------+
|
+--------+
|
||||||
|
|
||||||
|
|
||||||
If you want to pilot GNS3 you need to use the controller API.
|
Use the controller API to work with the GNS3 backend.
|
||||||
|
|
||||||
|
|
||||||
Communications
|
Communications
|
||||||
===============
|
==============
|
||||||
|
|
||||||
All the communication are done over HTTP using JSON.
|
All communication is done over HTTP using the JSON format.
|
||||||
|
|
||||||
Errors
|
Errors
|
||||||
======
|
======
|
||||||
|
|
||||||
In case of error a standard HTTP error is raise and you got a
|
A standard HTTP error is sent in case of an error:
|
||||||
JSON like that
|
|
||||||
|
|
||||||
.. code-block:: json
|
.. code-block:: json
|
||||||
|
|
||||||
@ -63,10 +60,6 @@ JSON like that
|
|||||||
"message": "Conflict"
|
"message": "Conflict"
|
||||||
}
|
}
|
||||||
|
|
||||||
409 error could be display to the user. They are normal behavior
|
|
||||||
they are used to warn user about something he should change and
|
|
||||||
they are not an internal software error.
|
|
||||||
|
|
||||||
|
|
||||||
Limitations
|
Limitations
|
||||||
============
|
============
|
||||||
@ -74,37 +67,34 @@ Limitations
|
|||||||
Concurrency
|
Concurrency
|
||||||
------------
|
------------
|
||||||
|
|
||||||
A node can't process multiple request in the same time. But you can make
|
A node cannot process multiple requests at the same time. However,
|
||||||
multiple request on multiple node. It's transparent for the client
|
multiple requests on multiple nodes can be executed concurrently.
|
||||||
when the first request on a Node start a lock is acquire for this node id
|
This should be transparent for clients since internal locks are used inside the server,
|
||||||
and released for the next request at the end. You can safely send all
|
so it is safe to send multiple requests at the same time and let the server
|
||||||
the requests in the same time and let the server manage an efficent concurrency.
|
manage the concurrency.
|
||||||
|
|
||||||
We think it can be a little slower for some operations, but it's remove a big
|
|
||||||
complexity for the client due to the fact only some command on some node can be
|
|
||||||
concurrent.
|
|
||||||
|
|
||||||
|
|
||||||
Authentication
|
Authentication
|
||||||
-----------------
|
--------------
|
||||||
|
|
||||||
You can use HTTP basic auth to protect the access to the API. And run
|
HTTP basic authentication can be used to prevent unauthorized API requests.
|
||||||
the API over HTTPS.
|
It is recommended to set up a VPN if the communication between clients and the server must be encrypted.
|
||||||
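HTTP basic authentication, as sent by curl's `-u` flag, is just a base64-encoded header. A small sketch with made-up credentials:

```shell
# Sketch: the Authorization header that `curl -u admin:secret` would send.
# The credentials are made up for illustration only.
AUTH=$(printf 'admin:secret' | base64)
echo "Authorization: Basic $AUTH"
# → Authorization: Basic YWRtaW46c2VjcmV0
```

With authentication enabled on the server, every request would then be made as e.g. `curl -u admin:secret "http://localhost:3080/v2/version"`.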
|
|
||||||
|
|
||||||
Notifications
|
Notifications
|
||||||
=============
|
=============
|
||||||
|
|
||||||
|
|
||||||
You can receive notification from the server if you listen the HTTP stream /notifications or the websocket.
|
Notifications can be received from the server by listening to an HTTP stream or via a Websocket.
|
||||||
|
|
||||||
Read :doc:`notifications` for more informations
|
Read :doc:`notifications` for more information.
|
||||||
|
|
||||||
Previous versions
|
Previous versions
|
||||||
=================
|
=================
|
||||||
|
|
||||||
API version 1
|
API version 1
|
||||||
-------------
|
-------------
|
||||||
Shipped with GNS3 1.3, 1.4 and 1.5.
|
|
||||||
This API doesn't support the controller system and save used a commit system instead of live save.
|
Shipped with GNS3 1.3, 1.4 and 1.5.
|
||||||
|
This API doesn't support the controller architecture.
|
||||||
|
|
||||||
|
@ -4,44 +4,41 @@ Glossary
|
|||||||
Topology
|
Topology
|
||||||
--------
|
--------
|
||||||
|
|
||||||
The place where you have all things (node, drawing, link...)
|
Contains everything to represent a virtual network (nodes, visual elements, links...)
|
||||||
|
|
||||||
|
|
||||||
Node
|
Node
|
||||||
-----
|
----
|
||||||
|
|
||||||
A Virtual Machine (Dynamips, IOU, Qemu, VPCS...), a cloud, a builtin device (switch, hub...)
|
A Virtual Machine (Dynamips, IOU, Qemu, VPCS...) or builtin node (cloud, switch, hub...) that runs on a compute.
|
||||||
|
|
||||||
Appliance
|
Appliance
|
||||||
---------
|
---------
|
||||||
|
|
||||||
A model for a node. When you drag an appliance to the topology a node is created.
|
A model used to create a node. When you drag an appliance onto the topology, a node is created.
|
||||||
|
|
||||||
|
|
||||||
Appliance template
|
Appliance template
|
||||||
------------------
|
------------------
|
||||||
|
|
||||||
A file (.gns3a) use for creating new node model.
|
A file (.gns3a) used to create a new node.
|
||||||
|
|
||||||
|
|
||||||
Drawing
|
Drawing
|
||||||
--------
|
-------
|
||||||
|
|
||||||
Drawing are visual element not used by the network emulation. Like
|
A Drawing is a visual element like annotations, images, rectangles etc. They are pure SVG elements.
|
||||||
text, images, rectangle... They are pure SVG elements.
|
|
||||||
|
|
||||||
Adapter
|
Adapter
|
||||||
-------
|
-------
|
||||||
|
|
||||||
The physical network interface. The adapter can contain multiple ports.
|
A physical network interface, like a PCI card. The adapter can contain multiple ports.
|
||||||
|
|
||||||
Port
|
Port
|
||||||
----
|
----
|
||||||
|
|
||||||
A port is an opening on network adapter that cable plug into.
|
A port is an opening on a network adapter that a cable can be plugged into.
|
||||||
|
|
||||||
For example a VM can have a serial and an ethernet adapter plugged in.
|
For example a VM can have a serial and an Ethernet adapter.
|
||||||
The ethernet adapter can have 4 ports.
|
The Ethernet adapter itself can have 4 ports.
|
||||||
|
|
||||||
Controller
|
Controller
|
||||||
----------
|
----------
|
||||||
@ -50,20 +47,23 @@ The central server managing everything in GNS3. A GNS3 controller
|
|||||||
will manage multiple GNS3 compute nodes.
|
will manage multiple GNS3 compute nodes.
|
||||||
|
|
||||||
Compute
|
Compute
|
||||||
----------
|
-------
|
||||||
|
|
||||||
The process running on each server with GNS3. The GNS3 compute node
|
The process running on each server with GNS3. The GNS3 compute node
|
||||||
is controlled by the GNS3 controller.
|
is controlled by the GNS3 controller.
|
||||||
|
|
||||||
Symbol
|
Symbol
|
||||||
------
|
------
|
||||||
Symbol are the icon used for nodes.
|
|
||||||
|
A symbol is an icon used to represent a node on a scene.
|
||||||
|
|
||||||
Scene
|
Scene
|
||||||
-----
|
-----
|
||||||
The drawing area
|
|
||||||
|
A scene is the drawing area or canvas.
|
||||||
|
|
||||||
|
|
||||||
Filter
|
Filter
|
||||||
------
|
------
|
||||||
Packet filter this allow to add latency or packet drop.
|
|
||||||
|
Packet filters, for instance to add latency on a link or drop packets.
|
||||||
|
@ -2,17 +2,13 @@ Welcome to API documentation!
|
|||||||
======================================
|
======================================
|
||||||
|
|
||||||
.. WARNING::
|
.. WARNING::
|
||||||
This documentation are for developers for user documentation go
|
This documentation is intended for developers. The user documentation is
|
||||||
to https://gns3.com/
|
available on https://gns3.com/
|
||||||
|
|
||||||
The API is not stable, feel free to post comments on our website
|
|
||||||
https://gns3.com/
|
|
||||||
|
|
||||||
|
|
||||||
This documentation cover the GNS3 API and ressources for GNS3 developers.
|
This documentation describes the GNS3 API and provides information for GNS3 developers.
|
||||||
|
|
||||||
|
For a quick demo on how to use the API read: :doc:`curl`
|
||||||
If you want a quick demo on how to use the API read: :doc:`curl`
|
|
||||||
|
|
||||||
API
|
API
|
||||||
----
|
----
|
||||||
@ -26,8 +22,8 @@ API
|
|||||||
position
|
position
|
||||||
endpoints
|
endpoints
|
||||||
|
|
||||||
GNS3 developements
|
GNS3 development
|
||||||
------------------
|
----------------
|
||||||
.. toctree::
|
.. toctree::
|
||||||
development
|
development
|
||||||
file_format
|
file_format
|
||||||
|
@ -1,17 +1,17 @@
|
|||||||
Notifications
|
Notifications
|
||||||
=============
|
=============
|
||||||
|
|
||||||
You can receive notification from the controller allowing you to update your local data.
|
Notifications can be received from the controller; they can be used to update your local data.
|
||||||
|
|
||||||
Notifications endpoints
|
Notification endpoints
|
||||||
***********************
|
**********************
|
||||||
|
|
||||||
You can listen the HTTP stream /notifications or the websocket.
|
Listen to the HTTP stream endpoint or to the Websocket endpoint.
|
||||||
|
|
||||||
* :doc:`api/v2/controller/project/projectsprojectidnotifications`
|
* :doc:`api/v2/controller/project/projectsprojectidnotifications`
|
||||||
* :doc:`api/v2/controller/project/projectsprojectidnotificationsws`
|
* :doc:`api/v2/controller/project/projectsprojectidnotificationsws`
|
||||||
|
|
||||||
We recommend using the websocket.
|
It is recommended to use the Websocket endpoint.
|
||||||
|
|
||||||
Available notifications
|
Available notifications
|
||||||
***********************
|
***********************
|
||||||
@ -21,7 +21,7 @@ Available notifications
|
|||||||
|
|
||||||
ping
|
ping
|
||||||
----
|
----
|
||||||
Keep the connection between client and controller.
|
Keep-alive between client and controller. Also used to receive the current CPU and memory usage.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/ping.json
|
.. literalinclude:: api/notifications/ping.json
|
||||||
|
|
||||||
@ -29,7 +29,7 @@ Keep the connection between client and controller.
|
|||||||
compute.created
|
compute.created
|
||||||
----------------
|
----------------
|
||||||
|
|
||||||
Compute has been created.
|
A compute has been created.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/compute.created.json
|
.. literalinclude:: api/notifications/compute.created.json
|
||||||
|
|
||||||
@ -37,9 +37,7 @@ Compute has been created.
|
|||||||
compute.updated
|
compute.updated
|
||||||
----------------
|
----------------
|
||||||
|
|
||||||
Compute has been updated. You will receive a lot of this
|
A compute has been updated.
|
||||||
event because it's include change of CPU and memory usage
|
|
||||||
on the compute node.
|
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/compute.updated.json
|
.. literalinclude:: api/notifications/compute.updated.json
|
||||||
|
|
||||||
@ -47,7 +45,7 @@ on the compute node.
|
|||||||
compute.deleted
|
compute.deleted
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Compute has been deleted.
|
A compute has been deleted.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/compute.deleted.json
|
.. literalinclude:: api/notifications/compute.deleted.json
|
||||||
|
|
||||||
@ -55,7 +53,7 @@ Compute has been deleted.
|
|||||||
node.created
|
node.created
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Node has been created.
|
A node has been created.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/node.created.json
|
.. literalinclude:: api/notifications/node.created.json
|
||||||
|
|
||||||
@ -63,7 +61,7 @@ Node has been created.
|
|||||||
node.updated
|
node.updated
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Node has been updated.
|
A node has been updated.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/node.updated.json
|
.. literalinclude:: api/notifications/node.updated.json
|
||||||
|
|
||||||
@ -71,7 +69,7 @@ Node has been updated.
|
|||||||
node.deleted
|
node.deleted
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Node has been deleted.
|
A node has been deleted.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/node.deleted.json
|
.. literalinclude:: api/notifications/node.deleted.json
|
||||||
|
|
||||||
@ -79,8 +77,8 @@ Node has been deleted.
|
|||||||
link.created
|
link.created
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Link has been created. Note that a link when created
|
A link has been created. Note that a link is not connected
|
||||||
is not yet connected to both part.
|
to any node when it is created.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/link.created.json
|
.. literalinclude:: api/notifications/link.created.json
|
||||||
|
|
||||||
@ -88,7 +86,7 @@ is not yet connected to both part.
|
|||||||
link.updated
|
link.updated
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Link has been updated.
|
A link has been updated.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/link.updated.json
|
.. literalinclude:: api/notifications/link.updated.json
|
||||||
|
|
||||||
@ -96,7 +94,7 @@ Link has been updated.
|
|||||||
link.deleted
|
link.deleted
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Link has been deleted.
|
A link has been deleted.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/link.deleted.json
|
.. literalinclude:: api/notifications/link.deleted.json
|
||||||
|
|
||||||
@ -104,7 +102,7 @@ Link has been deleted.
|
|||||||
drawing.created
|
drawing.created
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Drawing has been created.
|
A drawing has been created.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/drawing.created.json
|
.. literalinclude:: api/notifications/drawing.created.json
|
||||||
|
|
||||||
@ -112,8 +110,8 @@ Drawing has been created.
|
|||||||
drawing.updated
|
drawing.updated
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Drawing has been updated. To reduce data transfert if the
|
A drawing has been updated. The svg field is only included if it
|
||||||
svg field has not change the field is not included.
|
has changed in order to reduce data transfer.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/drawing.updated.json
|
.. literalinclude:: api/notifications/drawing.updated.json
|
||||||
|
|
||||||
@ -121,7 +119,7 @@ svg field has not change the field is not included.
|
|||||||
drawing.deleted
|
drawing.deleted
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Drawing has been deleted.
|
A drawing has been deleted.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/drawing.deleted.json
|
.. literalinclude:: api/notifications/drawing.deleted.json
|
||||||
|
|
||||||
@ -129,7 +127,7 @@ Drawing has been deleted.
|
|||||||
project.updated
|
project.updated
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Project has been updated.
|
A project has been updated.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/project.updated.json
|
.. literalinclude:: api/notifications/project.updated.json
|
||||||
|
|
||||||
@ -137,7 +135,7 @@ Project has been updated.
|
|||||||
project.closed
|
project.closed
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
Project has been closed.
|
A project has been closed.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/project.closed.json
|
.. literalinclude:: api/notifications/project.closed.json
|
||||||
|
|
||||||
@ -145,14 +143,14 @@ Project has been closed.
|
|||||||
snapshot.restored
|
snapshot.restored
|
||||||
--------------------------
|
--------------------------
|
||||||
|
|
||||||
Snapshot has been restored
|
A snapshot has been restored
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/project.snapshot_restored.json
|
.. literalinclude:: api/notifications/project.snapshot_restored.json
|
||||||
|
|
||||||
log.error
|
log.error
|
||||||
---------
|
---------
|
||||||
|
|
||||||
Send an error to the user
|
Sends an error
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/log.error.json
|
.. literalinclude:: api/notifications/log.error.json
|
||||||
|
|
||||||
@ -160,7 +158,7 @@ Send an error to the user
|
|||||||
log.warning
|
log.warning
|
||||||
------------
|
------------
|
||||||
|
|
||||||
Send a warning to the user
|
Sends a warning
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/log.warning.json
|
.. literalinclude:: api/notifications/log.warning.json
|
||||||
|
|
||||||
@ -168,7 +166,7 @@ Send a warning to the user
|
|||||||
log.info
|
log.info
|
||||||
---------
|
---------
|
||||||
|
|
||||||
Send an information to the user
|
Sends an information
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/log.info.json
|
.. literalinclude:: api/notifications/log.info.json
|
||||||
|
|
||||||
@ -176,8 +174,6 @@ Send an information to the user
|
|||||||
settings.updated
|
settings.updated
|
||||||
-----------------
|
-----------------
|
||||||
|
|
||||||
GUI settings updated. Will be removed in a later release.
|
GUI settings have been updated. Will be removed in a later release.
|
||||||
|
|
||||||
.. literalinclude:: api/notifications/settings.updated.json
|
.. literalinclude:: api/notifications/settings.updated.json
|
||||||
|
|
||||||
|
|
||||||
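The websocket notification feed documented above can be consumed with a short client. A minimal sketch; the third-party `websockets` package, the default port 3080, and the exact event field name `cpu_usage_percent` are assumptions on my part, not part of this commit:

```python
import asyncio
import json


def parse_notification(message):
    """Split a raw notification frame into (action, event payload)."""
    data = json.loads(message)
    return data["action"], data["event"]


async def listen(project_id, host="localhost", port=3080):
    # Third-party dependency, assumed installed: pip install websockets
    import websockets

    # Websocket notification endpoint for a project, as documented above.
    url = "ws://{}:{}/v2/projects/{}/notifications/ws".format(host, port, project_id)
    async with websockets.connect(url) as ws:
        async for message in ws:
            action, event = parse_notification(message)
            if action == "ping":
                # ping doubles as a CPU/memory usage report
                print("cpu:", event.get("cpu_usage_percent"))
            else:
                print(action, event)

# Example (requires a running controller and a real project id):
# asyncio.get_event_loop().run_until_complete(listen("<project_id>"))
```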
@@ -1,7 +1,7 @@
 Positions
 =========
 
-In a the project object you have properties scene_height and scene_width this define the
-size of the drawing area as px.
+A project object contains the scene_height and scene_width properties. This defines the
+size of the drawing area in px.
 
-The position of the node are relative to this with 0,0 as center of the area.
+The position of the nodes are relative with 0,0 as center of the area.
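Because node coordinates are centered on 0,0 while most drawing toolkits use a top-left origin, a renderer typically shifts positions by half the scene size. A small illustrative helper; the function name is mine, not part of the API:

```python
def to_canvas(x, y, scene_width, scene_height):
    """Map a node position (origin at the scene center) to
    top-left-origin canvas coordinates."""
    return x + scene_width / 2, y + scene_height / 2


# A node at 0,0 lands in the middle of a 2000x1000 px scene:
# to_canvas(0, 0, 2000, 1000) -> (1000.0, 500.0)
```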
@@ -9,6 +9,7 @@
 "product_url": "https://www.centos.org/download/",
 "registry_version": 5,
 "status": "stable",
+"availability": "free",
 "maintainer": "GNS3 Team",
 "maintainer_email": "developers@gns3.net",
 "usage": "Username: osboxes.org\nPassword: osboxes.org",
@@ -6,15 +6,16 @@
 "vendor_url": "https://www.checkpoint.com",
 "documentation_url": "http://downloads.checkpoint.com/dc/download.htm?ID=26770",
 "product_name": "Gaia",
-"registry_version": 3,
+"registry_version": 4,
 "status": "experimental",
 "maintainer": "GNS3 Team",
 "maintainer_email": "developers@gns3.net",
 "usage": "At boot choose the install on disk options. You need to open quickly the terminal after launching the appliance if you want to see the menu. You need a web browser in order to finalize the installation. You can use the firefox appliance for this.",
 "qemu": {
+"cpus": 2,
 "adapter_type": "e1000",
 "adapters": 8,
-"ram": 2048,
+"ram": 4096,
 "arch": "x86_64",
 "console_type": "telnet",
 "boot_priority": "dc",
@@ -44,33 +45,33 @@
 "download_url": "https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk104859"
 },
 {
-"filename": "empty8G.qcow2",
+"filename": "empty100G.qcow2",
 "version": "1.0",
-"md5sum": "f1d2c25b6990f99bd05b433ab603bdb4",
+"md5sum": "1e6409a4523ada212dea2ebc50e50a65",
 "filesize": 197120,
 "download_url": "https://sourceforge.net/projects/gns-3/files/Empty%20Qemu%20disk/",
-"direct_download_url": "https://sourceforge.net/projects/gns-3/files/Empty%20Qemu%20disk/empty8G.qcow2/download"
+"direct_download_url": "https://sourceforge.net/projects/gns-3/files/Empty%20Qemu%20disk/empty100G.qcow2/download"
 }
 ],
 "versions": [
 {
 "name": "80.10",
 "images": {
-"hda_disk_image": "empty8G.qcow2",
+"hda_disk_image": "empty100G.qcow2",
 "cdrom_image": "Check_Point_R80.10_T421_Gaia.iso"
 }
 },
 {
 "name": "77.30",
 "images": {
-"hda_disk_image": "empty8G.qcow2",
+"hda_disk_image": "empty100G.qcow2",
 "cdrom_image": "Check_Point_R77.30_T204_Install_and_Upgrade.Gaia.iso"
 }
 },
 {
 "name": "77.20",
 "images": {
-"hda_disk_image": "empty8G.qcow2",
+"hda_disk_image": "empty100G.qcow2",
 "cdrom_image": "Check_Point_R77.20_T124_Install.Gaia.iso"
 }
 }
@@ -25,23 +25,30 @@
 {
 "filename": "csr1000v-universalk9.16.07.01-serial.qcow2",
 "version": "16.7.1",
-"md5sum": "13adbfc2586d06c9802b9805168c0c44",
-"filesize": 882769920,
-"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Fuji-16.7.1&relind=AVAILABLE&rellifecycle=ED&reltype=latest"
+"md5sum": "bad9000d4ae8317bbc99a34a8cdd2eb4",
+"filesize": 884539392,
+"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Fuji-16.7.1"
+},
+{
+"filename": "csr1000v-universalk9.16.06.02-serial.qcow2",
+"version": "16.6.2",
+"md5sum": "11e393b31ab9d1ace8e5f7551c491ba2",
+"filesize": 1570242560,
+"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Everest-16.6.2"
 },
 {
 "filename": "csr1000v-universalk9.16.06.01-serial.qcow2",
 "version": "16.6.1",
 "md5sum": "909e74446d3ff0b82c14327c0058fdc2",
 "filesize": 1566179328,
-"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Denali-16.3.5&relind=AVAILABLE&rellifecycle=ED&reltype=latest"
+"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Everest-16.6.1"
 },
 {
 "filename": "csr1000v-universalk9.16.05.02-serial.qcow2",
 "version": "16.5.2",
 "md5sum": "59a84da28d59ee75176aa05ecde7f72a",
 "filesize": 1322385408,
-"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Denali-16.3.5&relind=AVAILABLE&rellifecycle=ED&reltype=latest"
+"download_url": "https://software.cisco.com/download/release.html?mdfid=284364978&flowid=39582&softwareid=282046477&release=Everest-16.5.2"
 },
 {
 "filename": "csr1000v-universalk9.16.5.1b-serial.qcow2",
@@ -93,6 +100,12 @@
 "hda_disk_image": "csr1000v-universalk9.16.07.01-serial.qcow2"
 }
 },
+{
+"name": "16.6.2",
+"images": {
+"hda_disk_image": "csr1000v-universalk9.16.06.02-serial.qcow2"
+}
+},
 {
 "name": "16.6.1",
 "images": {
@@ -9,6 +9,7 @@
 "product_url": "http://www.cisco.com/c/en/us/td/docs/security/firepower/quick_start/kvm/fmcv-kvm-qsg.html",
 "registry_version": 4,
 "status": "experimental",
+"availability": "service-contract",
 "maintainer": "Community",
 "maintainer_email":"",
 "usage": "BE PATIENT\nOn first boot FMCv generates about 6GB of data. This can take 30 minutes or more. Plan on a long wait after the following line in the boot up:\n\n usbcore: registered new interface driver usb-storage\n\nInitial IP address: 192.168.45.45.\n\nDefault username/password: admin/Admin123.",
@@ -9,6 +9,7 @@
 "product_url": "http://www.cisco.com/c/en/us/td/docs/security/firepower/quick_start/kvm/ftdv-kvm-qsg.html",
 "registry_version": 4,
 "status": "experimental",
+"availability": "service-contract",
 "maintainer": "Community",
 "maintainer_email": "",
 "usage": "Default username/password: admin/Admin123.",
@@ -9,6 +9,7 @@
 "product_url": "http://www.cisco.com/c/en/us/support/security/ngips-virtual-appliance/tsd-products-support-series-home.html",
 "registry_version": 4,
 "status": "experimental",
+"availability": "service-contract",
 "maintainer": "Community",
 "maintainer_email": "",
 "usage": "Default username/password: admin/Admin123.",
@@ -22,6 +22,13 @@
 "kvm": "require"
 },
 "images": [
+{
+"filename": "ClearOS-7.4-DVD-x86_64.iso",
+"version": "7.4",
+"md5sum": "826da592f9cd4b59f5fc996ff2d569f1",
+"filesize": 1029701632,
+"download_url": "https://www.clearos.com/clearfoundation/software/clearos-downloads"
+},
 {
 "filename": "ClearOS-7.3-DVD-x86_64.iso",
 "version": "7.3",
@@ -46,6 +53,13 @@
 }
 ],
 "versions": [
+{
+"name": "7.4",
+"images": {
+"hda_disk_image": "empty30G.qcow2",
+"cdrom_image": "ClearOS-7.4-DVD-x86_64.iso"
+}
+},
 {
 "name": "7.3",
 "images": {
@@ -21,6 +21,15 @@
 "kvm": "allow"
 },
 "images": [
+{
+"filename": "coreos_production_qemu_image.1632.2.1.img",
+"version": "1632.2.1",
+"md5sum": "facd05ca85eb87e2dc6aefd6779f6806",
+"filesize": 885719040,
+"download_url": "http://stable.release.core-os.net/amd64-usr/1632.2.1/",
+"direct_download_url": "http://stable.release.core-os.net/amd64-usr/1632.2.1/coreos_production_qemu_image.img.bz2",
+"compression": "bzip2"
+},
 {
 "filename": "coreos_production_qemu_image.1576.4.0.img",
 "version": "1576.4.0",
@@ -149,6 +158,12 @@
 }
 ],
 "versions": [
+{
+"name": "1632.2.1",
+"images": {
+"hda_disk_image": "coreos_production_qemu_image.1576.4.0.img"
+}
+},
 {
 "name": "1576.4.0",
 "images": {
@@ -23,6 +23,14 @@
 "kvm": "require"
 },
 "images": [
+{
+"filename": "cumulus-linux-3.5.2-vx-amd64.qcow2",
+"version": "3.5.2",
+"md5sum": "87d1d8b297e5ebd77924669dfb7e4c9f",
+"filesize": 996605952,
+"download_url": "https://cumulusnetworks.com/cumulus-vx/download/",
+"direct_download_url": "http://cumulusfiles.s3.amazonaws.com/cumulus-linux-3.5.0-vx-amd64.qcow2"
+},
 {
 "filename": "cumulus-linux-3.5.0-vx-amd64.qcow2",
 "version": "3.5.0",
@@ -133,6 +141,12 @@
 }
 ],
 "versions": [
+{
+"name": "3.5.2",
+"images": {
+"hda_disk_image": "cumulus-linux-3.5.2-vx-amd64.qcow2"
+}
+},
 {
 "name": "3.5.0",
 "images": {
@@ -27,6 +27,13 @@
 "options": "-smp 2 -cpu host"
 },
 "images": [
+{
+"filename": "BIGIP-13.1.0.2.0.0.6.qcow2",
+"version": "13.1.0 HF2",
+"md5sum": "d29eb861d8906fc36f88d9861a0055f4",
+"filesize": 4363649024,
+"download_url": "https://downloads.f5.com/esd/serveDownload.jsp?path=/big-ip/big-ip_v13.x/13.1.0/english/13.1.0.2_virtual-edition/&sw=BIG-IP&pro=big-ip_v13.x&ver=13.1.0&container=13.1.0.2_Virtual-Edition&file=BIGIP-13.1.0.2.0.0.6.ALL.qcow2.zip"
+},
 {
 "filename": "BIGIP-13.1.0.1.0.0.8.qcow2",
 "version": "13.1.0 HF1",
@@ -114,6 +121,13 @@
 }
 ],
 "versions": [
+{
+"name": "13.1.0 HF2",
+"images": {
+"hda_disk_image": "BIGIP-13.1.0.2.0.0.6.qcow2",
+"hdb_disk_image": "empty100G.qcow2"
+}
+},
 {
 "name": "13.1.0 HF1",
 "images": {
@@ -34,6 +34,13 @@
 "filesize": 30998528,
 "download_url": "https://support.fortinet.com/Download/FirmwareImages.aspx"
 },
+{
+"filename": "FAD_KVM-V400-build0989-FORTINET.out.kvm-boot.qcow2",
+"version": "4.8.4",
+"md5sum": "c1926d5979ef24d9d14d3394c0bb832b",
+"filesize": 72810496,
+"download_url": "https://support.fortinet.com/Download/FirmwareImages.aspx"
+},
 {
 "filename": "FAD_KVM-V400-build0983-FORTINET.out.kvm-boot.qcow2",
 "version": "4.8.3",
@@ -148,6 +155,13 @@
 }
 ],
 "versions": [
+{
+"name": "4.8.4",
+"images": {
+"hda_disk_image": "FAD_KVM-V400-build0989-FORTINET.out.kvm-boot.qcow2",
+"hdb_disk_image": "FAD_KVM-v400-FORTINET.out.kvm-data.qcow2"
+}
+},
 {
 "name": "4.8.3",
 "images": {
@@ -54,6 +54,13 @@
 "filesize": 38760448,
 "download_url": "https://support.fortinet.com/Download/FirmwareImages.aspx"
 },
+{
+"filename": "FGT_VM64_KVM-v5-build1183-FORTINET.out.kvm.qcow2",
+"version": "5.4.8",
+"md5sum": "c1eb02996a0919c934785d5f48df9507",
+"filesize": 38608896,
+"download_url": "https://support.fortinet.com/Download/FirmwareImages.aspx"
+},
 {
 "filename": "FGT_VM64_KVM-v5-build6446-FORTINET.out.kvm.qcow2",
 "version": "5.4.7",
@@ -204,6 +211,13 @@
 "hdb_disk_image": "empty30G.qcow2"
 }
 },
+{
+"name": "5.4.8",
+"images": {
+"hda_disk_image": "FGT_VM64_KVM-v5-build1183-FORTINET.out.kvm.qcow2",
+"hdb_disk_image": "empty30G.qcow2"
+}
+},
 {
 "name": "5.4.7",
 "images": {
@@ -27,6 +27,13 @@
 "options": "-smp 2"
 },
 "images": [
+{
+"filename": "FSA_KVM-v200-build0329-FORTINET.out.kvm.qcow2",
+"version": "2.5.1",
+"md5sum": "782ba56a644d78da59b89f4ac91bd319",
+"filesize": 114491904,
+"download_url": "https://support.fortinet.com/Download/FirmwareImages.aspx"
+},
 {
 "filename": "FSA_KVM-v200-build0261-FORTINET.out.kvm.qcow2",
 "version": "2.4.1",
@@ -71,6 +78,13 @@
 }
 ],
 "versions": [
+{
+"name": "2.5.1",
+"images": {
+"hda_disk_image": "FSA_KVM-v200-build0329-FORTINET.out.kvm.qcow2",
+"hdb_disk_image": "FSA_v200-datadrive.qcow2"
+}
+},
 {
 "name": "2.4.1",
 "images": {
@@ -24,6 +24,14 @@
 "kvm": "require"
 },
 "images": [
+{
+"filename": "FreeNAS-11.1-U1.iso",
+"version": "11.1 U1",
+"md5sum": "ccbd9990a5878d35c6bc0cc6eea34b16",
+"filesize": 626601984,
+"download_url": "http://www.freenas.org/download/",
+"direct_download_url": "http://download.freenas.org/11/11.1-RELEASE/x64/FreeNAS-11.1-RELEASE.iso"
+},
 {
 "filename": "FreeNAS-11.1-RELEASE.iso",
 "version": "11.1",
@@ -34,7 +42,7 @@
 },
 {
 "filename": "FreeNAS-11.0-U4.iso",
-"version": "11.0-U4",
+"version": "11.0 U4",
 "md5sum": "4c210f1a6510d1fa95257d81ef569ff8",
 "filesize": 567312384,
 "download_url": "http://www.freenas.org/download/",
@@ -42,7 +50,7 @@
 },
 {
 "filename": "FreeNAS-9.10.1-U4.iso",
-"version": "9.10",
+"version": "9.10 U4",
 "md5sum": "b4fb14513dcbb4eb4c5596c5911ca9cc",
 "filesize": 533098496,
 "download_url": "http://www.freenas.org/download/",
@@ -58,6 +66,14 @@
 }
 ],
 "versions": [
+{
+"name": "11.1 U1",
+"images": {
+"hda_disk_image": "empty30G.qcow2",
+"hdb_disk_image": "empty30G.qcow2",
+"cdrom_image": "FreeNAS-11.1-U1.iso"
+}
+},
 {
 "name": "11.1",
 "images": {
@@ -67,7 +83,7 @@
 }
 },
 {
-"name": "11.0",
+"name": "11.0 U4",
 "images": {
 "hda_disk_image": "empty30G.qcow2",
 "hdb_disk_image": "empty30G.qcow2",
@@ -75,7 +91,7 @@
 }
 },
 {
-"name": "9.10",
+"name": "9.10 U4",
 "images": {
 "hda_disk_image": "empty30G.qcow2",
 "hdb_disk_image": "empty30G.qcow2",
@@ -24,6 +24,15 @@
 "kvm": "allow"
 },
 "images": [
+{
+"filename": "ipfire-2.19.1gb-ext4-scon.x86_64-full-core117.img",
+"version": "2.19.117",
+"md5sum": "657673d88b94ed7d22332aebe817bc86",
+"filesize": 1063256064,
+"download_url": "http://www.ipfire.org/download",
+"direct_download_url": "https://downloads.ipfire.org/releases/ipfire-2.x/2.19-core117/ipfire-2.19.1gb-ext4-scon.x86_64-full-core117.img.gz",
+"compression": "gzip"
+},
 {
 "filename": "ipfire-2.19.1gb-ext4-scon.x86_64-full-core116.img",
 "version": "2.19.116",
@@ -53,6 +62,12 @@
 }
 ],
 "versions": [
+{
+"name": "2.19.117",
+"images": {
+"hda_disk_image": "ipfire-2.19.1gb-ext4-scon.x86_64-full-core117.img"
+}
+},
 {
 "name": "2.19.116",
 "images": {
gns3server/appliances/juniper-junos-space.gns3a (new file, 43 lines)
@@ -0,0 +1,43 @@
+{
+    "name": "Junos Space",
+    "category": "guest",
+    "description": "Junos Space Network Management Platform works with Juniper's management applications to simplify and automate management of Juniper's switching, routing, and security devices. As part of a complete solution, the platform provides broad fault, configuration, accounting, performance, and security management (FCAPS) capability, same day support for new devices and Junos OS releases, a task-specific user interface, and northbound APIs for integration with existing network management systems (NMS) or operations/business support systems (OSS/BSS).\n\nThe platform helps network operators at enterprises and service providers scale operations, reduce complexity, and enable new applications and services to be brought to market quickly, through multilayered network abstractions, operator-centric automation schemes, and a simple point-and-click UI.",
+    "vendor_name": "Juniper",
+    "vendor_url": "https://www.juniper.net/us/en/",
+    "documentation_url": "http://www.juniper.net/techpubs/",
+    "product_name": "Junos Space",
+    "product_url": "https://www.juniper.net/us/en/dm/free-vqfx-trial/",
+    "registry_version": 3,
+    "status": "stable",
+    "maintainer": "GNS3 Team",
+    "maintainer_email": "developers@gns3.net",
+    "symbol": "juniper-vqfx.svg",
+    "usage": "16 GB RAM is the bare minimum; you should use 32/64 GB in production deplyments.\nDefault credentials:\n- CLI: admin / abc123\n- WebUI: super / juniper123",
+    "port_name_format": "em{0}",
+    "qemu": {
+        "adapter_type": "e1000",
+        "adapters": 4,
+        "ram": 16384,
+        "arch": "x86_64",
+        "console_type": "telnet",
+        "kvm": "require",
+        "options": "-smp 4 -nographic"
+    },
+    "images": [
+        {
+            "filename": "space-17.2R1.4.qcow2",
+            "version": "17.2R1.4",
+            "md5sum": "4124fa756c3a78be0619e876b8ee687e",
+            "filesize": 5150474240,
+            "download_url": "https://www.juniper.net/support/downloads/?p=space#sw"
+        }
+    ],
+    "versions": [
+        {
+            "name": "17.2R1.4",
+            "images": {
+                "hda_disk_image": "space-17.2R1.4.qcow2"
+            }
+        }
+    ]
+}
@@ -9,6 +9,7 @@
 "product_url": "https://www.opensuse.org/#Leap",
 "registry_version": 4,
 "status": "stable",
+"availability": "free",
 "maintainer": "GNS3 Team",
 "maintainer_email": "developers@gns3.net",
 "usage": "Username: osboxes\nPassword: osboxes.org\n\nroot password: osboxes.org",
@@ -22,6 +22,15 @@
         "kvm": "require"
     },
     "images": [
+        {
+            "filename": "PacketFenceZEN_USB-7.4.0.img",
+            "version": "7.4.0",
+            "md5sum": "83951211540f16dd5813c26955c52429",
+            "filesize": 3221225472,
+            "download_url": "https://packetfence.org/download.html#/zen",
+            "direct_download_url": "https://sourceforge.net/projects/packetfence/files/PacketFence%20ZEN/7.4.0/PacketFenceZEN_USB-7.4.0.tar.bz2/download",
+            "compression": "bzip2"
+        },
         {
             "filename": "PacketFenceZEN_USB-7.3.0.img",
             "version": "7.3.0",
@@ -96,6 +105,12 @@
         }
     ],
     "versions": [
+        {
+            "name": "7.4.0",
+            "images": {
+                "hda_disk_image": "PacketFenceZEN_USB-7.4.0.img"
+            }
+        },
         {
             "name": "7.3.0",
             "images": {
gns3server/appliances/ubuntu-cloud.gns3a (new file, 94 lines)
@@ -0,0 +1,94 @@
+{
+    "name": "Ubuntu Cloud Guest",
+    "category": "guest",
+    "description": "The term 'Ubuntu Cloud Guest' refers to the Official Ubuntu images that are available at http://cloud-images.ubuntu.com . These images are built by Canonical. They are then registered on EC2, and compressed tarfiles are made also available for download. For using those images on a public cloud such as Amazon EC2, you simply choose an image and launch it. To use those images on a private cloud, or to run the image on a local hypervisor (such as KVM) you would need to download those images and either publish them to your private cloud, or launch them directly on a hypervisor. The following sections explain in more details how to perform each of those actions",
+    "vendor_name": "Canonical Inc.",
+    "vendor_url": "https://www.ubuntu.com",
+    "documentation_url": "https://help.ubuntu.com/community/UEC/Images",
+    "product_name": "Ubuntu Cloud Guest",
+    "product_url": "https://www.ubuntu.com/cloud",
+    "registry_version": 3,
+    "status": "stable",
+    "maintainer": "GNS3 Team",
+    "maintainer_email": "developers@gns3.net",
+    "usage": "Username: ubuntu\nPassword: ubuntu",
+    "port_name_format": "Ethernet{0}",
+    "qemu": {
+        "adapter_type": "virtio-net-pci",
+        "adapters": 1,
+        "ram": 1024,
+        "hda_disk_interface": "virtio",
+        "arch": "x86_64",
+        "console_type": "telnet",
+        "boot_priority": "c",
+        "kvm": "require",
+        "options": "-nographic"
+    },
+    "images": [
+        {
+            "filename": "ubuntu-17.10-server-cloudimg-amd64.img",
+            "version": "17.10",
+            "md5sum": "5d221878d8b2e49c5de7ebb58a2b35e3",
+            "filesize": 318373888,
+            "download_url": "https://cloud-images.ubuntu.com/releases/17.10/release/"
+        },
+        {
+            "filename": "ubuntu-17.04-server-cloudimg-amd64.img",
+            "version": "17.04",
+            "md5sum": "d4da8157dbf2e64f2fa1fb5d121398e5",
+            "filesize": 351993856,
+            "download_url": "https://cloud-images.ubuntu.com/releases/17.04/release/"
+        },
+        {
+            "filename": "ubuntu-16.04-server-cloudimg-amd64-disk1.img",
+            "version": "16.04.3",
+            "md5sum": "bd0c168a83b1f483bd240b3d874edd6c",
+            "filesize": 288686080,
+            "download_url": "https://cloud-images.ubuntu.com/releases/16.04/release/"
+        },
+        {
+            "filename": "ubuntu-14.04-server-cloudimg-amd64-disk1.img",
+            "version": "14.04.5",
+            "md5sum": "d7b4112c7d797e5e77ef9995d06a76f1",
+            "filesize": 262406656,
+            "download_url": "https://cloud-images.ubuntu.com/releases/14.04/release/"
+        },
+        {
+            "filename": "ubuntu-cloud-init-data.iso",
+            "version": "1.0",
+            "md5sum": "328469100156ae8dbf262daa319c27ff",
+            "filesize": 131072,
+            "download_url": "https://sourceforge.net/projects/gns-3/files/Qemu%20Appliances/ubuntu-cloud-init-data.iso/download"
+        }
+    ],
+    "versions": [
+        {
+            "name": "17.10",
+            "images": {
+                "hda_disk_image": "ubuntu-17.10-server-cloudimg-amd64.img",
+                "cdrom_image": "ubuntu-cloud-init-data.iso"
+            }
+        },
+        {
+            "name": "17.04",
+            "images": {
+                "hda_disk_image": "ubuntu-17.04-server-cloudimg-amd64.img",
+                "cdrom_image": "ubuntu-cloud-init-data.iso"
+            }
+        },
+        {
+            "name": "16.04 (LTS)",
+            "images": {
+                "hda_disk_image": "ubuntu-16.04-server-cloudimg-amd64-disk1.img",
+                "cdrom_image": "ubuntu-cloud-init-data.iso"
+            }
+        },
+        {
+            "name": "14.04 (LTS)",
+            "images": {
+                "hda_disk_image": "ubuntu-14.04-server-cloudimg-amd64-disk1.img",
+                "cdrom_image": "ubuntu-cloud-init-data.iso"
+            }
+        }
+    ]
+}
@@ -1,5 +1,5 @@
 {
-    "name": "Ubuntu",
+    "name": "Ubuntu Docker Guest",
     "category": "guest",
     "description": "Ubuntu is a Debian-based Linux operating system, with Unity as its default desktop environment. It is based on free software and named after the Southern African philosophy of ubuntu (literally, \"human-ness\"), which often is translated as \"humanity towards others\" or \"the belief in a universal bond of sharing that connects all humanity\".",
     "vendor_name": "Canonical",
@@ -1,5 +1,5 @@
 {
-    "name": "Ubuntu",
+    "name": "Ubuntu Desktop Guest",
     "category": "guest",
     "description": "Ubuntu is a full-featured Linux operating system which is based on Debian distribution and freely available with both community and professional support, it comes with Unity as its default desktop environment. There are other flavors of Ubuntu available with other desktops as default like Ubuntu Gnome, Lubuntu, Xubuntu, and so on. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away. A default installation of Ubuntu contains a wide range of software that includes LibreOffice, Firefox, Empathy, Transmission, etc.",
     "vendor_name": "Canonical Inc.",
@@ -24,6 +24,13 @@
         "kvm": "allow"
     },
     "images": [
+        {
+            "filename": "untangle_1320_x64.iso",
+            "version": "13.2.0",
+            "md5sum": "0ce2293acec0f37f1339e703653727f8",
+            "filesize": 768000000,
+            "download_url": "https://www.untangle.com/get-untangle/"
+        },
         {
             "filename": "untangle_1310_x64.iso",
             "version": "13.1.0",
@@ -90,6 +97,13 @@
         }
     ],
     "versions": [
+        {
+            "name": "13.2.0",
+            "images": {
+                "hda_disk_image": "empty30G.qcow2",
+                "cdrom_image": "untangle_1320_x64.iso"
+            }
+        },
         {
             "name": "13.1.0",
             "images": {
@@ -9,6 +9,7 @@
     "product_url": "https://www.microsoft.com/en-us/windows",
     "registry_version": 4,
     "status": "stable",
+    "availability": "free-to-try",
     "maintainer": "GNS3 Team",
     "maintainer_email": "developers@gns3.net",
     "usage": "These virtual machines expire after 90 days; i.e. you have to re-create them in your project after this time but you don't have to re-import the appliance.\n\nDefault credentials: IEUser / Passw0rd!",
@@ -25,6 +26,13 @@
         "kvm": "require"
     },
     "images": [
+        {
+            "filename": "MSEdge-Win10-VMWare-disk1.vmdk",
+            "version": "10 w/ Edge",
+            "md5sum": "fef74c69e1949480d4e2095324a169af",
+            "filesize": 5636608512,
+            "download_url": "https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/"
+        },
         {
             "filename": "MSEdge_-_Win10_preview.vmdk",
             "version": "10 w/ Edge",
@@ -71,6 +79,12 @@
     "versions": [
         {
             "name": "10 w/ Edge",
+            "images": {
+                "hda_disk_image": "MSEdge-Win10-VMWare-disk1.vmdk"
+            }
+        },
+        {
+            "name": "10 w/ Edge (Preview)",
             "images": {
                 "hda_disk_image": "MSEdge_-_Win10_preview.vmdk"
             }
@@ -9,6 +9,7 @@
     "product_url": "https://www.microsoft.com/en-us/windows",
     "registry_version": 4,
     "status": "stable",
+    "availability": "free-to-try",
     "maintainer": "GNS3 Team",
     "maintainer_email": "developers@gns3.net",
     "symbol": "microsoft.svg",
@@ -21,7 +22,8 @@
         "arch": "x86_64",
         "console_type": "vnc",
         "boot_priority": "c",
-        "kvm": "require"
+        "kvm": "require",
+        "options": "-usbdevice tablet"
     },
     "images": [
         {
@@ -555,7 +555,7 @@ class BaseManager:
         # We store the file under his final name only when the upload is finished
         tmp_path = path + ".tmp"
         os.makedirs(os.path.dirname(path), exist_ok=True)
-        with open(tmp_path, 'wb+') as f:
+        with open(tmp_path, 'wb') as f:
             while True:
                 packet = yield from stream.read(4096)
                 if not packet:
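The hunk above streams an upload into a `.tmp` file and only later moves it to its final name; `"wb"` replaces `"wb+"` because the file is never read back. A minimal sketch of that write-to-temp-then-rename pattern (the function name and paths here are illustrative, not the GNS3 API):

```python
import os

def save_atomically(path, chunks):
    """Write chunks to path via a temporary file, so readers never see a partial file."""
    tmp_path = path + ".tmp"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(tmp_path, "wb") as f:  # "wb" is enough: we only write, never read back
        for chunk in chunks:
            f.write(chunk)
    os.replace(tmp_path, path)  # atomic on POSIX when src and dst share a filesystem

# usage
save_atomically("/tmp/gr_demo_upload/data.bin", [b"hello ", b"world"])
print(open("/tmp/gr_demo_upload/data.bin", "rb").read())  # prints b'hello world'
```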
@@ -25,6 +25,7 @@ import asyncio
 import tempfile
 import psutil
 import platform
+import re

 from gns3server.utils.interfaces import interfaces
 from ..compute.port_manager import PortManager
@@ -598,15 +599,24 @@ class BaseNode:
     @asyncio.coroutine
     def _ubridge_apply_filters(self, bridge_name, filters):
         """
-        Apply filter like rate limiting
+        Apply packet filters

         :param bridge_name: bridge name in uBridge
-        :param filters: Array of filter dictionnary
+        :param filters: Array of filter dictionary
         """
         yield from self._ubridge_send('bridge reset_packet_filters ' + bridge_name)
-        for filter in self._build_filter_list(filters):
-            cmd = 'bridge add_packet_filter {} {}'.format(bridge_name, filter)
-            yield from self._ubridge_send(cmd)
+        for packet_filter in self._build_filter_list(filters):
+            cmd = 'bridge add_packet_filter {} {}'.format(bridge_name, packet_filter)
+            try:
+                yield from self._ubridge_send(cmd)
+            except UbridgeError as e:
+                match = re.search("Cannot compile filter '(.*)': syntax error", str(e))
+                if match:
+                    message = "Warning: ignoring BPF packet filter '{}' due to syntax error".format(match.group(1))
+                    log.warning(message)
+                    self.project.emit("log.warning", {"message": message})
+                else:
+                    raise

     def _build_filter_list(self, filters):
         """
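The new error handler above keeps a syntax error in one BPF filter from aborting the whole apply step: it matches the error text, logs a warning for that filter, and re-raises anything else. The parse-and-decide part in isolation (the error string format is taken from the hunk; the exception class here is a stand-in):

```python
import re

class UbridgeError(Exception):
    """Stand-in for the uBridge hypervisor error type."""

def classify_filter_error(error):
    """Return the offending filter expression for a BPF syntax error, else None."""
    match = re.search(r"Cannot compile filter '(.*)': syntax error", str(error))
    return match.group(1) if match else None

# a matching error yields the bad expression; anything else yields None
print(classify_filter_error(UbridgeError("Cannot compile filter 'ip proto foo': syntax error")))  # ip proto foo
print(classify_filter_error(UbridgeError("bridge not found")))  # None
```

A caller would log-and-skip on a non-`None` result and re-raise otherwise.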
@@ -99,7 +99,7 @@ class Docker(BaseManager):

         :param method: HTTP method
         :param path: Endpoint in API
-        :param data: Dictionnary with the body. Will be transformed to a JSON
+        :param data: Dictionary with the body. Will be transformed to a JSON
         :param params: Parameters added as a query arg
         """

@@ -314,7 +314,7 @@ class DockerVM(BaseNode):
             params["Env"].append("GNS3_VOLUMES={}".format(":".join(self._volumes)))

         if self._environment:
-            for e in self._environment.split("\n"):
+            for e in self._environment.strip().split("\n"):
                 e = e.strip()
                 if not e.startswith("GNS3_"):
                     params["Env"].append(e)
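The `.strip()` added above prevents a trailing newline in the environment string from producing an empty `Env` entry. The filtering logic, sketched standalone (the helper name is illustrative; the blank-entry check is added here for clarity):

```python
def build_env(environment):
    """Split a newline-separated env string, dropping blanks and GNS3-internal entries."""
    env = []
    for e in environment.strip().split("\n"):
        e = e.strip()
        if e and not e.startswith("GNS3_"):
            env.append(e)
    return env

print(build_env("FOO=1\nGNS3_VOLUMES=/data\nBAR=2\n"))  # ['FOO=1', 'BAR=2']
```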
@@ -352,7 +352,11 @@ class DockerVM(BaseNode):
     def start(self):
         """Starts this Docker container."""

-        state = yield from self._get_container_state()
+        try:
+            state = yield from self._get_container_state()
+        except DockerHttp404Error:
+            raise DockerError("Docker container '{name}' with ID {cid} does not exist or is not ready yet. Please try again in a few seconds.".format(name=self.name,
+                                                                                                                                                      cid=self._cid))
         if state == "paused":
             yield from self.unpause()
         elif state == "running":
@@ -547,7 +547,7 @@ class Dynamips(BaseManager):
                 content = content.replace('%h', vm.name)
                 f.write(content.encode("utf-8"))
         except OSError as e:
-            raise DynamipsError("Could not create config file {}: {}".format(path, e))
+            raise DynamipsError("Could not create config file '{}': {}".format(path, e))

         return os.path.join("configs", os.path.basename(path))

@@ -260,6 +260,8 @@ class DynamipsHypervisor:
         # Now retrieve the result
         data = []
         buf = ''
+        retries = 0
+        max_retries = 10
         while True:
             try:
                 try:
@@ -276,8 +278,14 @@
                     log.warning("Connection reset received while reading Dynamips response: {}".format(e))
                     continue
                 if not chunk:
-                    raise DynamipsError("No data returned from {host}:{port}, Dynamips process running: {run}"
-                                        .format(host=self._host, port=self._port, run=self.is_running()))
+                    if retries > max_retries:
+                        raise DynamipsError("No data returned from {host}:{port}, Dynamips process running: {run}"
+                                            .format(host=self._host, port=self._port, run=self.is_running()))
+                    else:
+                        retries += 1
+                        yield from asyncio.sleep(0.1)
+                        continue
+                retries = 0
                 buf += chunk.decode("utf-8", errors="ignore")
             except OSError as e:
                 raise DynamipsError("Could not read response for '{command}' from {host}:{port}: {error}, process running: {run}"
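The Dynamips change above turns "no data yet" from an immediate error into a bounded retry with a short sleep, and resets the counter after any successful read. The shape of that loop, reduced to a synchronous sketch (the reader callback and limits are placeholders):

```python
import time

def read_with_retries(read_chunk, max_retries=10, delay=0.0):
    """Accumulate chunks; tolerate up to max_retries consecutive empty reads."""
    buf = ""
    retries = 0
    while True:
        chunk = read_chunk()
        if chunk is None:           # reader signals end of response
            return buf
        if not chunk:
            if retries > max_retries:
                raise RuntimeError("no data returned")
            retries += 1
            time.sleep(delay)
            continue
        retries = 0                 # reset after any successful read
        buf += chunk

# usage: two empty reads before data arrives, then end-of-response
chunks = iter(["", "", "abc", "def", None])
print(read_with_retries(lambda: next(chunks)))  # prints "abcdef"
```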
@@ -283,12 +283,15 @@ class Router(BaseNode):
         if not self._ghost_flag:
             self.check_available_ram(self.ram)

+        # config paths are relative to the working directory configured on Dynamips hypervisor
         startup_config_path = os.path.join("configs", "i{}_startup-config.cfg".format(self._dynamips_id))
         private_config_path = os.path.join("configs", "i{}_private-config.cfg".format(self._dynamips_id))

-        if not os.path.exists(private_config_path) or not os.path.getsize(private_config_path):
+        if not os.path.exists(os.path.join(self._working_directory, private_config_path)) or \
+                not os.path.getsize(os.path.join(self._working_directory, private_config_path)):
             # an empty private-config can prevent a router to boot.
             private_config_path = ''

         yield from self._hypervisor.send('vm set_config "{name}" "{startup}" "{private}"'.format(
             name=self._name,
             startup=startup_config_path,
@@ -25,6 +25,7 @@ import asyncio
 from ..base_manager import BaseManager
 from .iou_error import IOUError
 from .iou_vm import IOUVM
+from .utils.application_id import get_next_application_id

 import logging
 log = logging.getLogger(__name__)
@@ -38,8 +39,7 @@ class IOU(BaseManager):
     def __init__(self):

         super().__init__()
-        self._free_application_ids = list(range(1, 512))
-        self._used_application_ids = {}
+        self._iou_id_lock = asyncio.Lock()

     @asyncio.coroutine
     def create_node(self, *args, **kwargs):
@@ -49,40 +49,14 @@ class IOU(BaseManager):
         :returns: IOUVM instance
         """

-        node = yield from super().create_node(*args, **kwargs)
-        try:
-            self._used_application_ids[node.id] = self._free_application_ids.pop(0)
-        except IndexError:
-            raise IOUError("Cannot create a new IOU VM (limit of 512 VMs reached on this host)")
+        with (yield from self._iou_id_lock):
+            # wait for a node to be completely created before adding a new one
+            # this is important otherwise we allocate the same application ID
+            # when creating multiple IOU node at the same time
+            application_id = get_next_application_id(self.nodes)
+            node = yield from super().create_node(*args, application_id=application_id, **kwargs)
         return node

-    @asyncio.coroutine
-    def close_node(self, node_id, *args, **kwargs):
-        """
-        Closes an IOU VM.
-
-        :returns: IOUVM instance
-        """
-
-        node = self.get_node(node_id)
-        if node_id in self._used_application_ids:
-            i = self._used_application_ids[node_id]
-            self._free_application_ids.insert(0, i)
-            del self._used_application_ids[node_id]
-        yield from super().close_node(node_id, *args, **kwargs)
-        return node
-
-    def get_application_id(self, node_id):
-        """
-        Get an unique application identifier for IOU.
-
-        :param node_id: Node identifier
-
-        :returns: IOU application identifier
-        """
-
-        return self._used_application_ids.get(node_id, 1)
-
     @staticmethod
     def get_legacy_vm_workdir(legacy_vm_id, name):
         """
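The rewrite above serializes node creation with an `asyncio.Lock` so two concurrent creations cannot both read the node list and pick the same application ID. A minimal sketch of that pattern in modern `async`/`await` style (the GNS3 code itself uses the older `yield from` coroutine syntax; class and pool size here are illustrative):

```python
import asyncio

class Allocator:
    def __init__(self):
        self._lock = asyncio.Lock()
        self._used = set()

    async def create(self):
        async with self._lock:        # only one creation runs at a time
            free_id = min(set(range(1, 512)) - self._used)
            await asyncio.sleep(0)    # simulate the async node-creation step
            self._used.add(free_id)   # ID is recorded before the lock is released
        return free_id

async def main():
    alloc = Allocator()
    ids = await asyncio.gather(*(alloc.create() for _ in range(5)))
    print(sorted(ids))  # prints [1, 2, 3, 4, 5] — five distinct IDs

asyncio.run(main())
```

Without the lock, every concurrent `create()` could compute the same `free_id` before any of them records it.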
@@ -65,7 +65,7 @@ class IOUVM(BaseNode):
     :param console: TCP console port
     """

-    def __init__(self, name, node_id, project, manager, path=None, console=None):
+    def __init__(self, name, node_id, project, manager, application_id=None, path=None, console=None):

         super().__init__(name, node_id, project, manager, console=console)

@@ -86,7 +86,7 @@ class IOUVM(BaseNode):
         self._startup_config = ""
         self._private_config = ""
         self._ram = 256  # Megabytes
-        self._application_id = None
+        self._application_id = application_id
         self._l1_keepalives = False  # used to overcome the always-up Ethernet interfaces (not supported by all IOSes).

     def _config(self):
@@ -348,14 +348,14 @@ class IOUVM(BaseNode):
         # reload
         path = os.path.join(os.path.expanduser("~/"), ".iourc")
         try:
-            with open(path, "wb+") as f:
+            with open(path, "wb") as f:
                 f.write(value.encode("utf-8"))
         except OSError as e:
             raise IOUError("Could not write the iourc file {}: {}".format(path, e))

         path = os.path.join(self.temporary_directory, "iourc")
         try:
-            with open(path, "wb+") as f:
+            with open(path, "wb") as f:
                 f.write(value.encode("utf-8"))
         except OSError as e:
             raise IOUError("Could not write the iourc file {}: {}".format(path, e))
@@ -1141,9 +1141,7 @@ class IOUVM(BaseNode):

         :returns: integer between 1 and 512
         """
-        if self._application_id is None:
-            #FIXME: is this necessary? application ID is allocated by controller and should not be None
-            return self._manager.get_application_id(self.id)
         return self._application_id

     @application_id.setter
@@ -24,13 +24,14 @@ log = logging.getLogger(__name__)
 def get_next_application_id(nodes):
     """
     Calculates free application_id from given nodes

     :param nodes:
     :raises IOUError when exceeds number
     :return: integer first free id
     """
-    used = set([n.properties.get('application_id') for n in nodes if n.node_type == 'iou'])
+    used = set([n.application_id for n in nodes])
     pool = set(range(1, 512))
     try:
         return (pool - used).pop()
     except KeyError:
-        raise IOUError("Cannot create a new IOU VM (limit of 512 VMs reached)")
+        raise IOUError("Cannot create a new IOU VM (limit of 512 VMs on one host reached)")
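`get_next_application_id` above allocates by set difference between the full ID pool and the IDs already in use. The same rule in isolation, with plain integers standing in for node objects (note that `set.pop()` returns an arbitrary element, not necessarily the smallest):

```python
def next_free_id(used_ids, pool_size=512):
    """Return an arbitrary free ID in [1, pool_size), or raise when the pool is exhausted."""
    free = set(range(1, pool_size)) - set(used_ids)
    if not free:
        raise RuntimeError("limit of {} IDs reached".format(pool_size - 1))
    return free.pop()

print(next_free_id([1, 2, 4]))  # some ID not in {1, 2, 4}
```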
@@ -533,7 +533,7 @@ class QemuVM(BaseNode):

         if not mac_address:
             # use the node UUID to generate a random MAC address
-            self._mac_address = "00:%s:%s:%s:%s:00" % (self.project.id[-4:-2], self.project.id[-2:], self.id[-4:-2], self.id[-2:])
+            self._mac_address = "52:%s:%s:%s:%s:00" % (self.project.id[-4:-2], self.project.id[-2:], self.id[-4:-2], self.id[-2:])
         else:
             self._mac_address = mac_address

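The hunk above switches the generated MAC prefix from `00` to `52`; `0x52` has the locally-administered bit (`0x02`) set, so the derived address cannot collide with any vendor-assigned OUI. A sketch of deriving such an address from the tails of two UUID strings, using the same slicing as the hunk (the function name is illustrative):

```python
import uuid

def generate_mac(project_id, node_id):
    """Derive a locally administered MAC from the tails of two UUID hex strings."""
    return "52:%s:%s:%s:%s:00" % (project_id[-4:-2], project_id[-2:],
                                  node_id[-4:-2], node_id[-2:])

mac = generate_mac(str(uuid.uuid4()), str(uuid.uuid4()))
print(mac)
assert int(mac.split(":")[0], 16) & 0x02  # locally administered bit is set
```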
@@ -397,12 +397,12 @@ class VMware(BaseManager):
         try:
             stdout_data, _ = yield from asyncio.wait_for(process.communicate(), timeout=timeout)
         except asyncio.TimeoutError:
-            raise VMwareError("vmrun has timed out after {} seconds!\nTry to run {} in a terminal to see more informations.\n\nMake sure GNS3 and VMware run under the same user and whitelist vmrun.exe in your antivirus.".format(timeout, command_string))
+            raise VMwareError("vmrun has timed out after {} seconds!\nTry to run {} in a terminal to see more details.\n\nMake sure GNS3 and VMware run under the same user and whitelist vmrun.exe in your antivirus.".format(timeout, command_string))

         if process.returncode:
             # vmrun print errors on stdout
             vmrun_error = stdout_data.decode("utf-8", errors="ignore")
-            raise VMwareError("vmrun has returned an error: {}\nTry to run {} in a terminal to see more informations.\nAnd make sure GNS3 and VMware run under the same user.".format(vmrun_error, command_string))
+            raise VMwareError("vmrun has returned an error: {}\nTry to run {} in a terminal to see more details.\nAnd make sure GNS3 and VMware run under the same user.".format(vmrun_error, command_string))

         return stdout_data.decode("utf-8", errors="ignore").splitlines()

@@ -745,7 +745,7 @@ class VMwareVM(BaseNode):
                               "Please remove it or allow VMware VM '{name}' to use any adapter.".format(attachment=self._vmx_pairs[connection_type],
                                                                                                        adapter_number=adapter_number,
                                                                                                        name=self.name))
-        elif self.is_running():
+        elif (yield from self.is_running()):
             raise VMwareError("Attachment '{attachment}' is configured on network adapter {adapter_number}. "
                               "Please stop VMware VM '{name}' to link to this adapter and allow GNS3 to change the attachment type.".format(attachment=self._vmx_pairs[connection_type],
                                                                                                                                           adapter_number=adapter_number,
@@ -82,7 +82,8 @@ class Controller:
                 if appliance.status != 'broken':
                     self._appliance_templates[appliance.id] = appliance
             except (ValueError, OSError, KeyError) as e:
-                log.warning("Can't load %s: %s", path, str(e))
+                log.warning("Cannot load appliance template file '%s': %s", path, str(e))
+                continue

         self._appliances = {}
         vms = []
@@ -122,13 +123,27 @@
             for prop in vm.copy():
                 if prop in ["enable_remote_console", "use_ubridge"]:
                     del vm[prop]

+            # remove deprecated default_symbol and hover_symbol
+            # and set symbol if not present
+            deprecated = ["default_symbol", "hover_symbol"]
+            if len([prop for prop in vm.keys() if prop in deprecated]) > 0:
+                if "default_symbol" in vm.keys():
+                    del vm["default_symbol"]
+                if "hover_symbol" in vm.keys():
+                    del vm["hover_symbol"]
+
+                if "symbol" not in vm.keys():
+                    vm["symbol"] = ":/symbols/computer.svg"
+
             vm.setdefault("appliance_id", str(uuid.uuid4()))
             try:
                 appliance = Appliance(vm["appliance_id"], vm)
+                appliance.__json__()  # Check if loaded without error
                 self._appliances[appliance.id] = appliance
             except KeyError as e:
                 # appliance data is not complete (missing name or type)
-                log.warning("Could not load appliance template {} ('{}'): {}".format(vm["appliance_id"], vm.get("name", "unknown"), e))
+                log.warning("Cannot load appliance template {} ('{}'): missing key {}".format(vm["appliance_id"], vm.get("name", "unknown"), e))
                 continue

             # Add builtins
@@ -44,7 +44,9 @@ class Appliance:
         # Version of the gui before 2.1 use linked_base
         # and the server linked_clone
         if "linked_base" in self._data:
-            self._data["linked_clone"] = self._data.pop("linked_base")
+            linked_base = self._data.pop("linked_base")
+            if "linked_clone" not in self._data:
+                self._data["linked_clone"] = linked_base
         if data["node_type"] == "iou" and "image" in data:
             del self._data["image"]
         self._builtin = builtin
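The hunk above changes the migration so an explicit `linked_clone` key is no longer clobbered by the deprecated `linked_base`. A minimal stand-alone sketch of that rename-without-overwrite pattern (`migrate_linked_base` is an illustrative name, not part of the patch):

```python
def migrate_linked_base(data):
    """Rename deprecated 'linked_base' to 'linked_clone', but keep an
    explicit 'linked_clone' value if the caller already provided one."""
    if "linked_base" in data:
        linked_base = data.pop("linked_base")
        if "linked_clone" not in data:
            data["linked_clone"] = linked_base
    return data

a = migrate_linked_base({"linked_base": True})
b = migrate_linked_base({"linked_base": True, "linked_clone": False})
```

With the old one-liner, `b` would have ended up `True`; the guard preserves the explicit `False`.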
@@ -118,7 +118,7 @@ class Drawing:
 
             file_path = os.path.join(self._project.pictures_directory, filename)
             if not os.path.exists(file_path):
-                with open(file_path, "wb+") as f:
+                with open(file_path, "wb") as f:
                     f.write(data)
             value = filename
 
@@ -64,7 +64,6 @@ def export_project(project, temporary_dir, include_images=False, keep_compute_id
-
     for root, dirs, files in os.walk(project._path, topdown=True):
         files = [f for f in files if not _filter_files(os.path.join(root, f))]
 
         for file in files:
            path = os.path.join(root, file)
            # Try open the file
@@ -162,9 +161,12 @@ def _export_project_file(project, path, z, include_images, keep_compute_id, allo
 
         if "properties" in node and node["node_type"] != "docker":
             for prop, value in node["properties"].items():
-                if not prop.endswith("image"):
-                    continue
-
+                if node["node_type"] == "iou":
+                    if not prop == "path":
+                        continue
+                elif not prop.endswith("image"):
+                    continue
                 if value is None or value.strip() == '':
                     continue
 
@@ -214,7 +216,6 @@ def _export_local_images(project, image, z):
             continue
 
     directory = os.path.split(img_directory)[-1:][0]
-
    if os.path.exists(image):
        path = image
    else:
@@ -265,4 +266,3 @@ def _export_remote_images(project, compute_id, image_type, image, project_zipfil
     arcname = os.path.join("images", image_type, image)
     log.info("Saved {}".format(arcname))
     project_zipfile.write(temp_path, arcname=arcname, compress_type=zipfile.ZIP_DEFLATED)
-
@@ -173,12 +173,16 @@ class Link:
         self._filters = new_filters
         if self._created:
             yield from self.update()
+            self._project.controller.notification.emit("link.updated", self.__json__())
+            self._project.dump()
 
     @asyncio.coroutine
     def update_suspend(self, value):
         if value != self._suspend:
             self._suspend = value
             yield from self.update()
+            self._project.controller.notification.emit("link.updated", self.__json__())
+            self._project.dump()
 
     @property
     def created(self):
@@ -196,6 +200,8 @@ class Link:
         """
 
         port = node.get_port(adapter_number, port_number)
+        if port is None:
+            raise aiohttp.web.HTTPNotFound(text="Port {}/{} for {} not found".format(adapter_number, port_number, node.name))
         if port.link is not None:
             raise aiohttp.web.HTTPConflict(text="Port is already used")
 
@@ -211,6 +217,8 @@ class Link:
 
         # Check if user is not connecting serial => ethernet
         other_port = other_node["node"].get_port(other_node["adapter_number"], other_node["port_number"])
+        if other_port is None:
+            raise aiohttp.web.HTTPNotFound(text="Port {}/{} for {} not found".format(other_node["adapter_number"], other_node["port_number"], other_node["node"].name))
         if port.link_type != other_port.link_type:
             raise aiohttp.web.HTTPConflict(text="It's not allowed to connect a {} to a {}".format(other_port.link_type, port.link_type))
 
@@ -299,6 +307,12 @@ class Link:
         Dump a pcap file on disk
         """
 
+        if os.path.exists(self.capture_file_path):
+            try:
+                os.remove(self.capture_file_path)
+            except OSError as e:
+                raise aiohttp.web.HTTPConflict(text="Could not delete old capture file '{}': {}".format(self.capture_file_path, e))
+
         try:
             stream_content = yield from self.read_pcap_from_source()
         except aiohttp.web.HTTPException as e:
@@ -309,16 +323,19 @@ class Link:
         self._project.controller.notification.emit("link.updated", self.__json__())
 
         with stream_content as stream:
-            with open(self.capture_file_path, "wb+") as f:
-                while self._capturing:
-                    # We read 1 bytes by 1 otherwise the remaining data is not read if the traffic stops
-                    data = yield from stream.read(1)
-                    if data:
-                        f.write(data)
-                        # Flush to disk otherwise the live is not really live
-                        f.flush()
-                    else:
-                        break
+            try:
+                with open(self.capture_file_path, "wb") as f:
+                    while self._capturing:
+                        # We read 1 bytes by 1 otherwise the remaining data is not read if the traffic stops
+                        data = yield from stream.read(1)
+                        if data:
+                            f.write(data)
+                            # Flush to disk otherwise the live is not really live
+                            f.flush()
+                        else:
+                            break
+            except OSError as e:
+                raise aiohttp.web.HTTPConflict(text="Could not write capture file '{}': {}".format(self.capture_file_path, e))
 
     @asyncio.coroutine
     def stop_capture(self):
@@ -574,12 +574,12 @@ class Node:
     def get_port(self, adapter_number, port_number):
         """
         Return the port for this adapter_number and port_number
-        or raise an HTTPNotFound
+        or returns None if the port is not found
         """
         for port in self.ports:
             if port.adapter_number == adapter_number and port.port_number == port_number:
                 return port
-        raise aiohttp.web.HTTPNotFound(text="Port {}/{} for {} not found".format(adapter_number, port_number, self.name))
+        return None
 
     def _list_ports(self):
         """
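With this change, `Node.get_port` reports a missing port by returning `None`, and each caller (see the `Link` and `Project` hunks) decides whether that means an HTTP 404 or just a warning. A stand-alone sketch of the lookup contract (the `Port` class and `get_port` helper here are illustrative, not the real classes):

```python
class Port:
    """Minimal stand-in for a node port."""
    def __init__(self, adapter_number, port_number):
        self.adapter_number = adapter_number
        self.port_number = port_number

def get_port(ports, adapter_number, port_number):
    """Linear lookup returning None for a missing port instead of raising
    inside the model layer, as the patched Node.get_port now does."""
    for port in ports:
        if port.adapter_number == adapter_number and port.port_number == port_number:
            return port
    return None

ports = [Port(0, 0), Port(0, 1)]
found = get_port(ports, 0, 1)      # existing port
missing = get_port(ports, 1, 0)    # unknown adapter: None, no exception
```

Keeping the HTTP error out of the model layer lets project loading log-and-skip a stale link instead of aborting.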
@@ -39,7 +39,6 @@ from ..utils.asyncio.pool import Pool
 from ..utils.asyncio import locked_coroutine, asyncio_ensure_future
 from .export_project import export_project
 from .import_project import import_project
-from ..compute.iou.utils.application_id import get_next_application_id
 
 import logging
 log = logging.getLogger(__name__)
@@ -86,7 +85,6 @@ class Project:
         self._show_grid = show_grid
         self._show_interface_labels = show_interface_labels
         self._loading = False
-        self._add_node_lock = asyncio.Lock()
 
         # Disallow overwrite of existing project
         if project_id is None and path is not None:
@@ -440,34 +438,27 @@ class Project:
         if node_id in self._nodes:
             return self._nodes[node_id]
 
-        with (yield from self._add_node_lock):
-            # wait for a node to be completely created before adding a new one
-            # this is important otherwise we allocate the same application ID
-            # when creating multiple IOU node at the same time
-            if node_type == "iou" and 'application_id' not in kwargs.keys():
-                kwargs['application_id'] = get_next_application_id(self._nodes.values())
-
-            node = Node(self, compute, name, node_id=node_id, node_type=node_type, **kwargs)
-            if compute not in self._project_created_on_compute:
-                # For a local server we send the project path
-                if compute.id == "local":
-                    yield from compute.post("/projects", data={
-                        "name": self._name,
-                        "project_id": self._id,
-                        "path": self._path
-                    })
-                else:
-                    yield from compute.post("/projects", data={
-                        "name": self._name,
-                        "project_id": self._id,
-                    })
-
-                self._project_created_on_compute.add(compute)
-            yield from node.create()
-            self._nodes[node.id] = node
-            self.controller.notification.emit("node.created", node.__json__())
-            if dump:
-                self.dump()
+        node = Node(self, compute, name, node_id=node_id, node_type=node_type, **kwargs)
+        if compute not in self._project_created_on_compute:
+            # For a local server we send the project path
+            if compute.id == "local":
+                yield from compute.post("/projects", data={
+                    "name": self._name,
+                    "project_id": self._id,
+                    "path": self._path
+                })
+            else:
+                yield from compute.post("/projects", data={
+                    "name": self._name,
+                    "project_id": self._id,
+                })
+
+            self._project_created_on_compute.add(compute)
+        yield from node.create()
+        self._nodes[node.id] = node
+        self.controller.notification.emit("node.created", node.__json__())
+        if dump:
+            self.dump()
         return node
 
     @locked_coroutine
@@ -667,9 +658,12 @@ class Project:
 
             with tempfile.TemporaryDirectory() as tmpdir:
                 zipstream = yield from export_project(self, tmpdir, keep_compute_id=True, allow_all_nodes=True)
-                with open(snapshot.path, "wb+") as f:
-                    for data in zipstream:
-                        f.write(data)
+                try:
+                    with open(snapshot.path, "wb") as f:
+                        for data in zipstream:
+                            f.write(data)
+                except OSError as e:
+                    raise aiohttp.web.HTTPConflict(text="Could not write snapshot file '{}': {}".format(snapshot.path, e))
         except OSError as e:
             raise aiohttp.web.HTTPInternalServerError(text="Could not create project directory: {}".format(e))
 
@@ -826,8 +820,11 @@ class Project:
             for node_link in link_data["nodes"]:
                 node = self.get_node(node_link["node_id"])
                 port = node.get_port(node_link["adapter_number"], node_link["port_number"])
+                if port is None:
+                    log.warning("Port {}/{} for {} not found".format(node_link["adapter_number"], node_link["port_number"], node.name))
+                    continue
                 if port.link is not None:
+                    # the node port is already attached to another link
+                    log.warning("Port {}/{} is already connected to link ID {}".format(node_link["adapter_number"], node_link["port_number"], port.link.id))
                     continue
                 yield from link.add_node(node, node_link["adapter_number"], node_link["port_number"], label=node_link.get("label"), dump=False)
                 if len(link.nodes) != 2:
@@ -899,7 +896,7 @@ class Project:
         try:
             with tempfile.TemporaryDirectory() as tmpdir:
                 zipstream = yield from export_project(self, tmpdir, keep_compute_id=True, allow_all_nodes=True)
-                with open(os.path.join(tmpdir, "project.gns3p"), "wb+") as f:
+                with open(os.path.join(tmpdir, "project.gns3p"), "wb") as f:
                     for data in zipstream:
                         f.write(data)
                 with open(os.path.join(tmpdir, "project.gns3p"), "rb") as f:
@@ -65,6 +65,8 @@ class IOUHandler:
 
         for name, value in request.json.items():
             if hasattr(vm, name) and getattr(vm, name) != value:
+                if name == "application_id":
+                    continue  # we must ignore this to avoid overwriting the application_id allocated by the IOU manager
                 if name == "startup_config_content" and (vm.startup_config_content and len(vm.startup_config_content) > 0):
                     continue
                 if name == "private_config_content" and (vm.private_config_content and len(vm.private_config_content) > 0):
@@ -116,6 +118,8 @@ class IOUHandler:
 
         for name, value in request.json.items():
             if hasattr(vm, name) and getattr(vm, name) != value:
+                if name == "application_id":
+                    continue  # we must ignore this to avoid overwriting the application_id allocated by the IOU manager
                 setattr(vm, name, value)
 
         if vm.use_default_iou_values:
@@ -19,12 +19,12 @@ import aiohttp
 import asyncio
 import json
 import os
-import psutil
 import tempfile
 
 from gns3server.web.route import Route
 from gns3server.compute.project_manager import ProjectManager
 from gns3server.compute import MODULES
+from gns3server.utils.ping_stats import PingStats
 
 from gns3server.schemas.project import (
     PROJECT_OBJECT_SCHEMA,
@@ -186,11 +186,7 @@ class ProjectHandler:
 
         :returns: hash
         """
-        stats = {}
-        # Non blocking call in order to get cpu usage. First call will return 0
-        stats["cpu_usage_percent"] = psutil.cpu_percent(interval=None)
-        stats["memory_usage_percent"] = psutil.virtual_memory().percent
-        return {"action": "ping", "event": stats}
+        return {"action": "ping", "event": PingStats.get()}
 
     @Route.get(
         r"/projects/{project_id}/files",
@@ -16,6 +16,7 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
 import os
+import aiohttp
 from gns3server.web.route import Route
 from gns3server.controller import Controller
 
@@ -62,12 +63,15 @@ class SymbolHandler:
     def upload(request, response):
         controller = Controller.instance()
         path = os.path.join(controller.symbols.symbols_path(), os.path.basename(request.match_info["symbol_id"]))
-        with open(path, 'wb+') as f:
-            while True:
-                packet = yield from request.content.read(512)
-                if not packet:
-                    break
-                f.write(packet)
+        try:
+            with open(path, 'wb') as f:
+                while True:
+                    packet = yield from request.content.read(512)
+                    if not packet:
+                        break
+                    f.write(packet)
+        except OSError as e:
+            raise aiohttp.web.HTTPConflict(text="Could not write symbol file '{}': {}".format(path, e))
         # Reset the symbol list
         controller.symbols.list()
         response.set_status(204)
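The upload handler above reads the request body in fixed 512-byte chunks until EOF. The same loop, sketched synchronously against in-memory streams (`copy_stream` is an illustrative helper, not part of the patch):

```python
import io

def copy_stream(src, dst, chunk_size=512):
    """Read fixed-size chunks until an empty read signals EOF,
    mirroring the symbol upload handler's while/break loop."""
    while True:
        packet = src.read(chunk_size)
        if not packet:
            break
        dst.write(packet)

src = io.BytesIO(b"x" * 1300)   # 1300 bytes: two full chunks plus a partial one
dst = io.BytesIO()
copy_stream(src, dst)
```

The empty-read sentinel works for both regular files and aiohttp request bodies, which is why the handler does not need a Content-Length.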
@@ -17,7 +17,8 @@
 
 import asyncio
 import json
-import psutil
+
+from gns3server.utils.ping_stats import PingStats
 
 
 class NotificationQueue(asyncio.Queue):
@@ -38,24 +39,14 @@ class NotificationQueue(asyncio.Queue):
         # At first get we return a ping so the client immediately receives data
         if self._first:
             self._first = False
-            return ("ping", self._getPing(), {})
+            return ("ping", PingStats.get(), {})
 
         try:
             (action, msg, kwargs) = yield from asyncio.wait_for(super().get(), timeout)
         except asyncio.futures.TimeoutError:
-            return ("ping", self._getPing(), {})
+            return ("ping", PingStats.get(), {})
         return (action, msg, kwargs)
 
-    def _getPing(self):
-        """
-        Return the content of the ping notification
-        """
-        msg = {}
-        # Non blocking call in order to get cpu usage. First call will return 0
-        msg["cpu_usage_percent"] = psutil.cpu_percent(interval=None)
-        msg["memory_usage_percent"] = psutil.virtual_memory().percent
-        return msg
-
     @asyncio.coroutine
     def get_json(self, timeout):
         """
@@ -160,7 +160,7 @@ def pid_lock(path):
         with open(path) as f:
             try:
                 pid = int(f.read())
-                os.kill(pid, 0)  # If the proces is not running kill return an error
+                os.kill(pid, 0)  # kill returns an error if the process is not running
             except (OSError, SystemError, ValueError):
                 pid = None
     except OSError as e:
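`os.kill(pid, 0)` delivers no signal; it only fails if the PID cannot be signalled, which is what `pid_lock` relies on to detect a stale lock file. A POSIX-flavored sketch of that probe (`pid_is_running` is an illustrative name, not part of the patch):

```python
import os

def pid_is_running(pid):
    """Probe a PID with signal 0: nothing is delivered, but kill()
    raises if the process does not exist (the pid_lock trick)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True

alive = pid_is_running(os.getpid())  # our own process is certainly running
```

The patch itself catches the broader `(OSError, SystemError, ValueError)` because the lock file may also contain garbage instead of an integer.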
@@ -138,8 +138,14 @@ class Hypervisor(UBridgeHypervisor):
                 match = re.search("ubridge version ([0-9a-z\.]+)", output)
                 if match:
                     self._version = match.group(1)
-                    if parse_version(self._version) < parse_version("0.9.12"):
-                        raise UbridgeError("uBridge executable version must be >= 0.9.12")
+                    if sys.platform.startswith("win") or sys.platform.startswith("darwin"):
+                        minimum_required_version = "0.9.12"
+                    else:
+                        # uBridge version 0.9.14 is required for packet filters
+                        # to work for IOU nodes.
+                        minimum_required_version = "0.9.14"
+                    if parse_version(self._version) < parse_version(minimum_required_version):
+                        raise UbridgeError("uBridge executable version must be >= {}".format(minimum_required_version))
                 else:
                     raise UbridgeError("Could not determine uBridge version for {}".format(self._path))
             except (OSError, subprocess.SubprocessError) as e:
@@ -214,6 +214,8 @@ class UBridgeHypervisor:
         # Now retrieve the result
         data = []
         buf = ''
+        retries = 0
+        max_retries = 10
         while True:
             try:
                 try:
@@ -222,9 +224,21 @@ class UBridgeHypervisor:
                     # task has been canceled but continue to read
                     # any remaining data sent by the hypervisor
                     continue
+                except ConnectionResetError as e:
+                    # Sometimes WinError 64 (ERROR_NETNAME_DELETED) is returned here on Windows.
+                    # These happen if connection reset is received before IOCP could complete
+                    # a previous operation. Ignore and try again....
+                    log.warning("Connection reset received while reading uBridge response: {}".format(e))
+                    continue
                 if not chunk:
-                    raise UbridgeError("No data returned from {host}:{port}, uBridge process running: {run}"
-                                       .format(host=self._host, port=self._port, run=self.is_running()))
+                    if retries > max_retries:
+                        raise UbridgeError("No data returned from {host}:{port}, uBridge process running: {run}"
+                                           .format(host=self._host, port=self._port, run=self.is_running()))
+                    else:
+                        retries += 1
+                        yield from asyncio.sleep(0.1)
+                        continue
+                retries = 0
                 buf += chunk.decode("utf-8")
             except OSError as e:
                 raise UbridgeError("Lost communication with {host}:{port} :{error}, uBridge process running: {run}"
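The new logic tolerates a bounded number of consecutive empty reads (resetting the counter whenever data arrives) before declaring the hypervisor gone. The same pattern, sketched synchronously with an injectable reader and no-op sleep (all names here are illustrative, not the real API):

```python
def read_message(read_chunk, max_retries=10, sleep=lambda: None):
    """Accumulate chunks until a newline-terminated message is complete.
    Up to max_retries consecutive empty reads are retried (with a sleep
    between attempts), as the patched uBridge read loop does."""
    retries = 0
    buf = ""
    while True:
        chunk = read_chunk()
        if not chunk:
            if retries > max_retries:
                raise RuntimeError("no data returned")
            retries += 1
            sleep()
            continue
        retries = 0  # data arrived: reset the retry budget
        buf += chunk
        if buf.endswith("\n"):
            return buf

# Two empty reads (transient), then the message arrives in two pieces
chunks = iter(["", "", "hello", " world\n"])
msg = read_message(lambda: next(chunks))
```

Resetting `retries` on every successful read is the key detail: the budget bounds consecutive failures, not failures over the connection's lifetime.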
gns3server/utils/ping_stats.py | 50 (new file)
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2018 GNS3 Technologies Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import psutil
+import time
+
+
+class PingStats:
+    """
+    Ping messages are regularly sent to the client to keep the connection open.
+    We send with it some information about server load.
+    """
+
+    _last_measurement = 0.0    # time of last measurement
+    _last_cpu_percent = 0.0    # last cpu_percent
+    _last_mem_percent = 0.0    # last virtual_memory().percent
+
+    @classmethod
+    def get(cls):
+        """
+        Get ping statistics
+
+        :returns: hash
+        """
+        stats = {}
+        cur_time = time.time()
+        # minimum interval for getting CPU and memory statistics
+        if cur_time < cls._last_measurement or \
+           cur_time > cls._last_measurement + 1.9:
+            cls._last_measurement = cur_time
+            # Non blocking call to get cpu usage. First call will return 0
+            cls._last_cpu_percent = psutil.cpu_percent(interval=None)
+            cls._last_mem_percent = psutil.virtual_memory().percent
+        stats["cpu_usage_percent"] = cls._last_cpu_percent
+        stats["memory_usage_percent"] = cls._last_mem_percent
+        return stats
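`PingStats.get` refreshes the psutil measurements at most every ~1.9 seconds, and also re-measures if the clock jumped backwards (`cur_time < _last_measurement`). That throttle can be sketched without psutil by injecting the measurement function and clock; `ThrottledSampler` and the fakes below are illustrative, not part of the patch:

```python
import time


class ThrottledSampler:
    """Cache a measurement and refresh it at most every `interval` seconds.

    Mirrors the PingStats logic: re-measure when the clock moved backwards
    or when the interval has elapsed; otherwise return the cached value.
    """

    def __init__(self, measure, clock=time.time, interval=1.9):
        self._measure = measure    # stand-in for psutil.cpu_percent
        self._clock = clock        # stand-in for time.time
        self._interval = interval
        self._last_time = 0.0
        self._last_value = 0.0

    def get(self):
        cur_time = self._clock()
        if cur_time < self._last_time or cur_time > self._last_time + self._interval:
            self._last_time = cur_time
            self._last_value = self._measure()
        return self._last_value


# A clock we control and a counter that changes on every real measurement
now = [100.0]
calls = [0]

def fake_measure():
    calls[0] += 1
    return float(calls[0])

sampler = ThrottledSampler(fake_measure, clock=lambda: now[0])
a = sampler.get()   # first call: measures
now[0] += 1.0
b = sampler.get()   # within 1.9 s: cached value returned
now[0] += 2.0
c = sampler.get()   # interval elapsed: fresh measurement
```

The backwards-clock guard matters because `time.time()` is not monotonic; without it, an NTP step backwards would freeze the cached values until the wall clock caught up again.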
@@ -310,7 +310,7 @@ apt-get install -y
    dnsutils \
    nginx-light
 
-MY_IP_ADDR=$(dig @ns1.google.com -t txt o-o.myaddr.l.google.com +short | sed 's/"//g')
+MY_IP_ADDR=$(dig @ns1.google.com -t txt o-o.myaddr.l.google.com +short -4 | sed 's/"//g')
 
 log "IP detected: $MY_IP_ADDR"
 
@@ -19,7 +19,6 @@
 import pytest
 from unittest.mock import patch
 import uuid
-import os
 import sys
 
 pytestmark = pytest.mark.skipif(sys.platform.startswith("win"), reason="Not supported on Windows")
@@ -40,18 +39,17 @@ def iou(port_manager):
     return iou
 
 
-def test_get_application_id(loop, project, iou):
+def test_application_id(loop, project, iou):
     vm1_id = str(uuid.uuid4())
     vm2_id = str(uuid.uuid4())
     vm3_id = str(uuid.uuid4())
-    loop.run_until_complete(iou.create_node("PC 1", project.id, vm1_id))
-    loop.run_until_complete(iou.create_node("PC 2", project.id, vm2_id))
-    assert iou.get_application_id(vm1_id) == 1
-    assert iou.get_application_id(vm1_id) == 1
-    assert iou.get_application_id(vm2_id) == 2
+    vm1 = loop.run_until_complete(iou.create_node("PC 1", project.id, vm1_id))
+    vm2 = loop.run_until_complete(iou.create_node("PC 2", project.id, vm2_id))
+    assert vm1.application_id == 1
+    assert vm2.application_id == 2
     loop.run_until_complete(iou.delete_node(vm1_id))
-    loop.run_until_complete(iou.create_node("PC 3", project.id, vm3_id))
-    assert iou.get_application_id(vm3_id) == 1
+    vm3 = loop.run_until_complete(iou.create_node("PC 3", project.id, vm3_id))
+    assert vm3.application_id == 1
 
 
 def test_get_application_id_multiple_project(loop, iou):
@@ -60,20 +58,20 @@ def test_get_application_id_multiple_project(loop, iou):
     vm3_id = str(uuid.uuid4())
     project1 = ProjectManager.instance().create_project(project_id=str(uuid.uuid4()))
     project2 = ProjectManager.instance().create_project(project_id=str(uuid.uuid4()))
-    loop.run_until_complete(iou.create_node("PC 1", project1.id, vm1_id))
-    loop.run_until_complete(iou.create_node("PC 2", project1.id, vm2_id))
-    loop.run_until_complete(iou.create_node("PC 2", project2.id, vm3_id))
-    assert iou.get_application_id(vm1_id) == 1
-    assert iou.get_application_id(vm2_id) == 2
-    assert iou.get_application_id(vm3_id) == 3
+    vm1 = loop.run_until_complete(iou.create_node("PC 1", project1.id, vm1_id))
+    vm2 = loop.run_until_complete(iou.create_node("PC 2", project1.id, vm2_id))
+    vm3 = loop.run_until_complete(iou.create_node("PC 2", project2.id, vm3_id))
+    assert vm1.application_id == 1
+    assert vm2.application_id == 2
+    assert vm3.application_id == 3
 
 
 def test_get_application_id_no_id_available(loop, project, iou):
     with pytest.raises(IOUError):
         for i in range(1, 513):
             node_id = str(uuid.uuid4())
-            loop.run_until_complete(iou.create_node("PC {}".format(i), project.id, node_id))
-            assert iou.get_application_id(node_id) == i
+            vm = loop.run_until_complete(iou.create_node("PC {}".format(i), project.id, node_id))
+            assert vm.application_id == i
 
 
 def test_get_images_directory(iou, tmpdir):
@@ -48,7 +48,7 @@ def manager(port_manager):
 
 @pytest.fixture(scope="function")
 def vm(project, manager, tmpdir, fake_iou_bin, iourc_file):
-    vm = IOUVM("test", str(uuid.uuid4()), project, manager)
+    vm = IOUVM("test", str(uuid.uuid4()), project, manager, application_id=1)
     config = manager.config.get_section_config("IOU")
     config["iourc_path"] = iourc_file
     manager.config.set_section_config("IOU", config)
@@ -84,7 +84,7 @@ def test_vm(project, manager):
 
 
 def test_vm_startup_config_content(project, manager):
-    vm = IOUVM("test", "00010203-0405-0607-0808-0a0b0c0d0e0f", project, manager)
+    vm = IOUVM("test", "00010203-0405-0607-0808-0a0b0c0d0e0f", project, manager, application_id=1)
     vm.startup_config_content = "hostname %h"
     assert vm.name == "test"
     assert vm.startup_config_content == "hostname test"
@@ -94,7 +94,6 @@ def test_vm_startup_config_content(project, manager):
 def test_start(loop, vm):

     mock_process = MagicMock()

     vm._check_requirements = AsyncioMagicMock(return_value=True)
     vm._check_iou_licence = AsyncioMagicMock(return_value=True)
     vm._start_ubridge = AsyncioMagicMock(return_value=True)
@@ -440,7 +439,7 @@ def test_application_id(project, manager):
     """
    Checks if uses local manager to get application_id when not set
     """
-    vm = IOUVM("test", str(uuid.uuid4()), project, manager)
+    vm = IOUVM("test", str(uuid.uuid4()), project, manager, application_id=1)
     assert vm.application_id == 1

     vm.application_id = 3
@@ -1,38 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2017 GNS3 Technologies Inc.
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-
-import pytest
-
-from unittest.mock import MagicMock
-
-from gns3server.compute.iou.utils.application_id import get_next_application_id, IOUError
-
-
-def test_get_next_application_id():
-    # test first node
-    assert get_next_application_id([]) == 1
-
-    # test second node
-    nodes = [
-        MagicMock(node_type='different'),
-        MagicMock(node_type='iou', properties=dict(application_id=1))
-    ]
-    assert get_next_application_id(nodes) == 2
-
-    # test reach out the limit
-    nodes = [MagicMock(node_type='iou', properties=dict(application_id=i)) for i in range(1, 512)]
-
-    with pytest.raises(IOUError):
-        get_next_application_id(nodes)
@@ -510,7 +510,9 @@ def test_load_appliances(controller):
         "Qemu": {
             "vms": [
                 {
-                    "name": "Test"
+                    "name": "Test",
+                    "node_type": "qemu",
+                    "category": "router"
                 }
             ]
         }
@@ -538,6 +540,52 @@ def test_load_appliances(controller):
     assert cloud_uuid == appliance.id


+
+def test_load_appliances_deprecated_features_default_symbol(controller):
+    controller._settings = {
+        "Qemu": {
+            "vms": [
+                {
+                    "name": "Test",
+                    "node_type": "qemu",
+                    "category": "router",
+                    "default_symbol": ":/symbols/iosv_virl.normal.svg",
+                    "hover_symbol": ":/symbols/iosv_virl.selected.svg",
+                }
+            ]
+        }
+    }
+    controller.load_appliances()
+    appliances = dict([(a.name, a) for a in controller.appliances.values()])
+
+    assert appliances["Test"].__json__()["symbol"] == ":/symbols/computer.svg"
+    assert "default_symbol" not in appliances["Test"].data.keys()
+    assert "hover_symbol" not in appliances["Test"].data.keys()
+
+
+def test_load_appliances_deprecated_features_default_symbol_with_symbol(controller):
+    controller._settings = {
+        "Qemu": {
+            "vms": [
+                {
+                    "name": "Test",
+                    "node_type": "qemu",
+                    "category": "router",
+                    "default_symbol": ":/symbols/iosv_virl.normal.svg",
+                    "hover_symbol": ":/symbols/iosv_virl.selected.svg",
+                    "symbol": ":/symbols/my-symbol.svg"
+                }
+            ]
+        }
+    }
+    controller.load_appliances()
+    appliances = dict([(a.name, a) for a in controller.appliances.values()])
+
+    assert appliances["Test"].__json__()["symbol"] == ":/symbols/my-symbol.svg"
+    assert "default_symbol" not in appliances["Test"].data.keys()
+    assert "hover_symbol" not in appliances["Test"].data.keys()
+
+
 def test_autoidlepc(controller, async_run):
     controller._computes["local"] = AsyncioMagicMock()
     node_mock = AsyncioMagicMock()
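The two new tests assert that `load_appliances` strips the deprecated `default_symbol` and `hover_symbol` keys and falls back to `:/symbols/computer.svg` only when no explicit `symbol` is set. A minimal sketch of that migration step (the function name is assumed for illustration; the controller's actual code is not shown in this diff):

```python
def migrate_appliance_symbols(data):
    """Drop deprecated symbol keys; default 'symbol' when none is given."""
    data = dict(data)  # work on a copy so the caller's settings stay intact
    data.pop("default_symbol", None)
    data.pop("hover_symbol", None)
    data.setdefault("symbol", ":/symbols/computer.svg")
    return data
```

Fed the first test's settings this yields the `computer.svg` default; fed the second test's settings it keeps `:/symbols/my-symbol.svg`, and in both cases the deprecated keys are gone.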
@@ -517,8 +517,8 @@ def test_get_port(node):
     assert port.port_number == 0
     port = node.get_port(1, 0)
     assert port.adapter_number == 1
-    with pytest.raises(aiohttp.web.HTTPNotFound):
-        port = node.get_port(42, 0)
+    port = node.get_port(42, 0)
+    assert port is None


 def test_parse_node_response(node, async_run):
@@ -646,27 +646,6 @@ def test_node_name(project, async_run):
     assert node.name == "R3"


-def test_add_iou_node_and_check_if_gets_application_id(project, async_run):
-    compute = MagicMock()
-    compute.id = "local"
-    response = MagicMock()
-    response.json = {"console": 2048}
-    compute.post = AsyncioMagicMock(return_value=response)
-
-    # tests if get_next_application_id is called
-    with patch('gns3server.controller.project.get_next_application_id', return_value=222) as mocked_get_app_id:
-        node = async_run(project.add_node(
-            compute, "test", None, node_type="iou", properties={"startup_config": "test.cfg"}))
-    assert mocked_get_app_id.called
-    assert node.properties['application_id'] == 222
-
-    # tests if we can send property and it will be used
-    node = async_run(project.add_node(
-        compute, "test", None, node_type="iou", application_id=333, properties={"startup_config": "test.cfg"}))
-    assert mocked_get_app_id.called
-    assert node.properties['application_id'] == 333
-
-
 def test_duplicate_node(project, async_run):
     compute = MagicMock()
     compute.id = "local"