Merge branch 'staging' into 'master'

Staging

MVP documentation has been updated to match the Liberty release of OpenStack.

This series of changes updates the documentation in these files:

* index.rst
* openstack_installing_bundles.rst
* openstack_environment-database.rst
* openstack_environment-messaging.rst
* openstack_identity.rst
* openstack_identity-openrc.rst
* openstack_image.rst
* openstack_compute.rst

**Please do review and merge once Issue #1 is closed by Obed N Munoz.**

See merge request !5
This commit is contained in:
Tullis, Michael L
2015-11-02 11:50:25 -08:00
9 changed files with 1683 additions and 1225 deletions

View File

@@ -1,58 +1,71 @@
.. _bootable_usb:

Creating a bootable USB to install the OS
=========================================
Here's how to create a USB drive that initiates the process for
:ref:`clr_as_host`. Alternatively, you can test the OS by :ref:`clr_in_virtual_env`.
What you need
-------------
* A USB stick, formatted as ``ext4``. Remember that the process of flashing
  data to a USB completely deletes the contents of the drive; as always, run
  ``dd`` with caution.
* An OS image; the most current version can be found here:
  `https://download.clearlinux.org/image <https://download.clearlinux.org/image>`_.
  For older versions, see our `downloads page <https://download.clearlinux.org/>`_.
Grab the installer image
------------------------

.. code:: text

   $ wget https://download.clearlinux.org/image/clear-[release_number]-installer.img.xz
Check the SHA512
----------------

.. code:: text

   $ sha512sum clear-[release_number]-installer.img.xz
The above command outputs the checksum of the image you just downloaded; it will, of course, vary depending on the version you're testing.
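To take the manual comparison out of checking the hash, ``sha512sum -c`` can verify a file against a recorded checksum line. This is a sketch using a throwaway stand-in file; the file names are illustrative, not the real image:

```shell
# Stand-in for the downloaded image; real use would point at
# clear-[release_number]-installer.img.xz instead.
printf 'not a real image\n' > sample.img.xz

# Record the checksum in "<hash>  <file>" form (normally you would paste
# the published hash into this file rather than generate it locally).
sha512sum sample.img.xz > sample.img.xz.SHA512SUMS

# Verify: prints "sample.img.xz: OK" and exits 0 if the hashes match.
sha512sum -c sample.img.xz.SHA512SUMS
```

On a mismatch, ``sha512sum -c`` reports ``FAILED`` and exits nonzero, so it can gate the rest of a scripted install.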
Confirm the mount point on the USB drive
----------------------------------------

Using :command:`lsblk` is helpful to show the block-level devices; a USB drive
usually shows up under ``/sdb`` or ``/sdc`` (almost never ``/sda``) and should
indicate disk space approximately the size of the USB drive:

.. code:: text

   $ lsblk /dev/sdb
   NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   sdb      8:16   1 14.9G  0 disk
   └─sdb1   8:17   1 14.9G  0 part
Make sure that the USB is not mounted
-------------------------------------

The easiest way to check is with ``# df``.
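If you want this check in script form, one way (a sketch; the device name is whatever ``lsblk`` reported for your stick) is to consult ``/proc/mounts`` and unmount only when needed:

```shell
# Unmount a partition only if /proc/mounts lists it as mounted;
# does nothing (and succeeds) when the device is already unmounted.
ensure_unmounted() {
    dev="$1"
    if grep -qs "^$dev " /proc/mounts; then
        umount "$dev"    # unmounting typically requires root
    fi
}

# Example: ensure_unmounted /dev/sdb1
```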
Flash the image to the USB
--------------------------
Flash the image with the following command, adding the ``-v`` option for verbose mode
(recommended), as the image file may be large and the process can take a while. This
may need to be done as root.

.. code:: text

   # xzcat -v clear-[release_number]-installer.img.xz | dd of=/dev/sdb bs=4M
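One cautious extra step (optional, and not something the installer requires): flush the kernel's write buffers before unplugging the stick, since ``dd`` may return while data is still queued in the page cache:

```shell
# Block until all buffered writes have been flushed to the devices.
sync
```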
Wait for the final confirmation
-------------------------------
This example shows ``clear-2190-installer.img.xz`` flashed to a 16GB USB drive
mounted on ``/sdc``.

.. image:: images/gs_confirmation_screen.png
   :align: center
   :alt: confirmation

Success! Your USB stick is now ready to boot and initiate the process for
:ref:`clr_as_host`.

View File

@@ -46,6 +46,12 @@ OpenStack* implementation
openstack_bundle_and_service_summary
openstack_sys_req_and_pw_summary
openstack_installing_bundles
openstack_environment-database
openstack_environment-messaging
openstack_identity
openstack_identity-openrc
openstack_image
openstack_compute
openstack_block_storage
openstack_dashboard
openstack_networking

View File

@@ -0,0 +1,589 @@
OpenStack Compute
#################
Use OpenStack Compute to host and manage cloud computing systems.
OpenStack Compute interacts with OpenStack Identity for authentication,
OpenStack Image Service for disk and server images, and OpenStack
dashboard for the user and administrative interface. Image access is
limited by projects, and by users; quotas are limited per project (the
number of instances, for example). OpenStack Compute can scale
horizontally on standard hardware, and download images to launch
instances.
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the
Compute service, code-named nova, on the controller node.
Prerequisites
-------------
Before you install and configure the Compute service, you must
create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
#. Use the database access client to connect to the database server
as the root user:
.. code-block:: console
$ mysql -u root -p
#. Create the ``nova`` database:
.. code-block:: console
CREATE DATABASE nova;
#. Grant proper access to the nova database. Replace ``NOVA_DBPASS``
with a suitable password.
.. code-block:: console
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
#. Exit the database access client.
#. Source the admin credentials to gain access to admin-only CLI
commands:
.. code-block:: console
$ source admin-openrc.sh
#. To create the service credentials, complete these steps:
* Create the ``nova`` user. Replace ``NOVA_PASS`` with a suitable
password.
.. code-block:: console
$ openstack user create --domain default --password NOVA_PASS nova
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8c46e4760902464b889293a74a0c90a8 |
| name | nova |
+-----------+----------------------------------+
* Add the ``admin`` role to the ``nova`` user:
.. code-block:: console
$ openstack role add --project service --user nova admin
#. Create the ``nova`` service entity:
.. code-block:: console
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
#. Create the Compute service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | e702f6f497ed42e6a8ae3ba2e5871c78 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
Installing and configuring the Compute controller components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To install and configure the Compute controller components:
#. Install OpenStack Compute Controller bundle:
.. code-block:: console
# clr_bundle_add openstack-compute-controller
#. Custom configurations will be located at ``/etc/nova``.
* Create the ``/etc/nova`` directory.
.. code-block:: console
# mkdir /etc/nova
* Create empty nova configuration file ``/etc/nova/nova.conf``.
.. code-block:: console
# touch /etc/nova/nova.conf
#. Edit the ``/etc/nova/nova.conf`` file and complete the following
actions:
* In the ``[database]`` section, configure database access. Replace
``NOVA_DBPASS`` with the password you chose for the Compute
database.
.. code-block:: ini
[database]
...
connection=mysql://nova:NOVA_DBPASS@controller/nova
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message queue access. Replace ``RABBIT_PASS``
with the password you chose for the ``openstack`` account in RabbitMQ.
.. code-block:: ini
[DEFAULT]
...
rpc_backend = rabbit
.. code-block:: ini
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access. Replace ``NOVA_PASS`` with the
password you chose for the nova user in the Identity service.
.. code-block:: ini
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
* In the ``[vnc]`` section, configure the VNC proxy to use the
management interface IP address of the controller node:
.. code-block:: ini
[vnc]
...
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
* In the ``[glance]`` section, configure the location of the
Image Service:
.. code-block:: ini
[glance]
...
host = controller
#. Let systemd set the correct permissions for files in ``/etc/nova``.
.. code-block:: console
# systemctl restart update-triggers.target
#. Populate the Compute database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage db sync" nova
Finalizing Compute installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Complete the following steps to finalize Compute installation:
#. Start the Compute Service services and configure them to start
when the system boots:
.. code-block:: console
# systemctl enable uwsgi@nova-api.socket \
nova-cert.service nova-consoleauth.service \
nova-scheduler.service nova-conductor.service \
nova-novncproxy.service
# systemctl start uwsgi@nova-api.socket \
nova-cert.service nova-consoleauth.service \
nova-scheduler.service nova-conductor.service \
nova-novncproxy.service
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service
on a compute node. This configuration uses the QEMU hypervisor with the
KVM extension on compute nodes that support hardware acceleration for
virtual machines.
Install and configure components
--------------------------------
#. Install OpenStack Compute bundle:
.. code-block:: console
# clr_bundle_add openstack-compute
#. Custom configurations will be located at ``/etc/nova``.
* Create ``/etc/nova`` directory.
.. code-block:: console
# mkdir /etc/nova
* Create empty nova configuration file ``/etc/nova/nova.conf``.
.. code-block:: console
# touch /etc/nova/nova.conf
#. Edit the ``/etc/nova/nova.conf`` file and complete the following
actions:
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
configure RabbitMQ message broker access. Replace ``RABBIT_PASS``
with the password you chose for the ``openstack`` account in RabbitMQ.
.. code-block:: ini
[DEFAULT]
...
rpc_backend = rabbit
.. code-block:: ini
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access. Replace ``NOVA_PASS`` with the
password you chose for the nova user in the Identity service.
.. code-block:: ini
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option.
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of
the management network interface on your compute node, typically
``10.0.0.31`` for the first node in the example architecture.
.. code-block:: ini
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. code-block:: ini
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
* In the ``[vnc]`` section, enable and configure remote console access:
.. code-block:: ini
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy
component only listens on the management interface IP address of
the compute node. The base URL indicates the location where you
can use a web browser to access remote consoles of instances on
this compute node.
* In the ``[glance]`` section, configure the location of the
Image Service:
.. code-block:: ini
[glance]
...
host = controller
Finalize compute node installation
----------------------------------
#. Determine whether your compute node supports hardware acceleration
for virtual machines:
.. code-block:: console
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of ``1`` or greater, your compute
node supports hardware acceleration, which typically requires no
additional configuration.
If this command returns a value of ``0``, your compute node does
not support hardware acceleration, and you must configure ``libvirt``
to use QEMU instead of KVM.
* Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file
as follows:
.. code-block:: ini
[libvirt]
...
virt_type = qemu
#. Start the Compute service including its dependencies and configure
them to start automatically when the system boots:
.. code-block:: console
# systemctl enable libvirtd.service \
nova-compute.service
# systemctl start libvirtd.service \
nova-compute.service
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Compute service.
.. note::
Perform these commands on the controller node.
#. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. List service components to verify successful launch and
registration of each process:
.. code-block:: console
$ nova service-list
+----+------------------+------------+----------+---------+-------+--------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+--------------+-----------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2014-09-16.. | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2014-09-16.. | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2014-09-16.. | - |
| 4 | nova-cert | controller | internal | enabled | up | 2014-09-16.. | - |
| 5 | nova-compute | compute1 | nova | enabled | up | 2014-09-16.. | - |
+----+------------------+------------+----------+---------+-------+--------------+-----------------+
#. List API endpoints in the Identity service to verify connectivity
with the Identity service:
.. code-block:: console
$ nova endpoints
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | 1fb997666b79463fb68db4ccfe4e6a71 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/ae7a98326b9c455588edd2656d723b9d |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | bac365db1ff34f08a31d4ae98b056924 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/ae7a98326b9c455588edd2656d723b9d |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova | Value |
+-----------+------------------------------------------------------------+
| id | e37186d38b8e4b81a54de34e73b43f34 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:8774/v2/ae7a98326b9c455588edd2656d723b9d |
+-----------+------------------------------------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 41ad39f6c6444b7d8fd8318c18ae0043 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 50ecc4ce62724e319f4fae3861e50f7d |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 7d3df077a20b4461a372269f603b7516 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 88150c2fdc9d406c9b25113701248192 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | cecab58c0f024d95b36a4ffa3e8d81e1 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | fc90391ae7cd4216aca070042654e424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://controller:35357/v2.0 |
+-----------+----------------------------------+
.. note::
Ignore any warnings in this output.
#. List images in the Image service catalog to verify connectivity
with the Image service:
.. code-block:: console
$ nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+

View File

@@ -0,0 +1,56 @@
Database
~~~~~~~~
Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB.
Install and configure the database server
-----------------------------------------
#. Install MariaDB bundle:
.. code-block:: console
# clr_bundle_add database-mariadb
#. Create the ``/etc/mariadb/`` folder and the ``/etc/mariadb/openstack.cnf`` file.
.. code-block:: console
# mkdir /etc/mariadb
# touch /etc/mariadb/openstack.cnf
#. Add a ``[mysqld]`` section, set the ``bind-address`` key to the
management IP address of the controller node to enable access by
other nodes via the management network, and enable useful options for
the UTF-8 character set:
.. code:: ini
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Finalizing database installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Complete the following steps to finalize database installation:
#. Start the database service and configure it to start when the system
boots:
.. code:: console
# systemctl enable mariadb.service
# systemctl start mariadb.service
#. Secure the database service, including choosing a suitable password
for the root account:
.. code:: console
# mysql_secure_installation

View File

@@ -0,0 +1,54 @@
Message queue
~~~~~~~~~~~~~
OpenStack uses a `message queue` to coordinate operations and
status information among services. The message queue service typically
runs on the controller node. OpenStack supports several message queue
services. This guide implements the RabbitMQ message queue service.
Install the message queue service
---------------------------------
#. Install the message queue bundle:
.. code-block:: console
# clr_bundle_add message-broker-rabbitmq
Configuring the message broker service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Complete the following steps to configure the message broker service:
#. The message broker service needs to be able to resolve its own host
name. Add the following line to ``/etc/hosts``:
.. code:: console
127.0.0.1 controller
#. Start the message broker service and configure it to start when the
system boots:
.. code:: console
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
#. Add the OpenStack user:
.. code:: console
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user openstack ...
...done.
Replace ``RABBIT_PASS`` with a suitable password.
#. Permit configuration, write, and read access for the OpenStack user:
.. code:: console
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.

View File

@@ -0,0 +1,80 @@
Create OpenStack client environment scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The previous section used a combination of environment variables and
command options to interact with the Identity service via the
``openstack`` client. To increase efficiency of client operations,
OpenStack supports simple client environment scripts also known as
OpenRC files. These scripts typically contain common options for
all clients, but also support unique options. For more information, see the
`OpenStack User Guide <http://docs.openstack.org/user-guide/common/
cli_set_environment_variables_using_openstack_rc.html>`__.
Creating the scripts
--------------------
Create client environment scripts for the ``admin`` and ``demo``
projects and users. Future portions of this guide reference these
scripts to load appropriate credentials for client operations.
#. Edit the ``admin-openrc.sh`` file and add the following content:
.. code-block:: bash
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Replace ``ADMIN_PASS`` with the password you chose
for the ``admin`` user in the Identity service.
#. Edit the ``demo-openrc.sh`` file and add the following content:
.. code-block:: bash
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Replace ``DEMO_PASS`` with the password you chose
for the ``demo`` user in the Identity service.
Using the scripts
-----------------
To run clients as a specific project and user, load the associated
client environment script before running them.
For example:
#. Load the ``admin-openrc.sh`` file to populate
environment variables with the location of the Identity service
and the ``admin`` project and user credentials:
.. code-block:: console
$ source admin-openrc.sh
#. Request an authentication token:
.. code-block:: console
$ openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2015-03-25T01:45:49.950092Z |
| id | cd4110152ac24bdeaa82e1443c910c36 |
| project_id | cf12a15c5ea84b019aec3dc45580896b |
| user_id | 4d411f2291f34941b30eef9bd797505a |
+------------+----------------------------------+

View File

@@ -0,0 +1,498 @@
OpenStack Identity
############################################################
The OpenStack Identity service provides a single point of
integration for managing authentication, authorization, and service catalog
services. Other OpenStack services use the Identity service as a common
unified API. Additionally, services that provide information about users
but that are not included in OpenStack (such as LDAP services) can be
integrated into a pre-existing infrastructure.
In order to benefit from the Identity service, other OpenStack services need to
collaborate with it. When an OpenStack service receives a request from a user,
it checks with the Identity service whether the user is authorized to make the
request.
The Identity service contains these components:
**Server**
A centralized server provides authentication and authorization
services using a RESTful interface.
**Drivers**
Drivers or a service back end are integrated to the centralized
server. They are used for accessing identity information in
repositories external to OpenStack, and may already exist in
the infrastructure where OpenStack is deployed (for example, SQL
databases or LDAP servers).
**Modules**
Middleware modules run in the address space of the OpenStack
component that is using the Identity service. These modules
intercept service requests, extract user credentials, and send them
to the centralized server for authorization. The integration between
the middleware modules and OpenStack components uses the Python Web
Server Gateway Interface.
When installing OpenStack Identity service, you must register each
service in your OpenStack installation. Identity service can then track
which OpenStack services are installed, and where they are located on
the network.
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
performance, this configuration deploys the Nginx HTTP server to handle
requests.
Prerequisites
-------------
Before you configure the OpenStack Identity service, you must create a
database and an administration token.
#. To create the database, complete the following actions:
* Use the database access client to connect to the database server as the
``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``keystone`` database:
.. code-block:: console
CREATE DATABASE keystone;
* Grant proper access to the ``keystone`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Replace ``KEYSTONE_DBPASS`` with a suitable password.
* Exit the database access client.
#. Generate a random value to use as the administration token during
initial configuration:
.. code-block:: console
$ openssl rand -hex 10
Install and configure components
--------------------------------
#. Run the following command to install the packages:
.. code-block:: console
# clr_bundle_add openstack-identity
#. Custom configurations will be located at ``/etc/keystone/``.
* Create the ``/etc/keystone`` directory:
.. code-block:: console
# mkdir /etc/keystone
* Create empty keystone configuration file ``/etc/keystone/keystone.conf``:
.. code-block:: console
# touch /etc/keystone/keystone.conf
#. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, define the value of the initial
administration token:
.. code-block:: ini
[DEFAULT]
...
admin_token = ADMIN_TOKEN
Replace ``ADMIN_TOKEN`` with the random value that you generated in a
previous step.
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.
#. Let systemd set the correct permissions for files in ``/etc/keystone``:
.. code-block:: console
# systemctl restart update-triggers.target
#. Populate the Identity service database:
.. code-block:: console
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Finalize the installation
-------------------------
#. Keystone is deployed as a uWSGI module. To start the Identity
service, enable and start the nginx service and the keystone
uWSGI sockets:
.. code-block:: console
# systemctl enable nginx uwsgi@keystone-admin.socket \
uwsgi@keystone-public.socket
# systemctl start nginx uwsgi@keystone-admin.socket \
uwsgi@keystone-public.socket
Create the service entity and API endpoints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Identity service provides a catalog of services and their locations.
Each service that you add to your OpenStack environment requires a
service entity and several API endpoints in the catalog.
Prerequisites
-------------
You must pass the value of the authentication token to the ``openstack``
command with the ``--os-token`` parameter or set the ``OS_TOKEN``
environment variable. Similarly, you must pass the value of the
Identity service URL to the ``openstack`` command with the ``--os-url``
parameter or set the ``OS_URL`` environment variable. This guide uses
environment variables to reduce command length.
#. Configure the authentication token:
.. code-block:: console
$ export OS_TOKEN=ADMIN_TOKEN
Replace ``ADMIN_TOKEN`` with the authentication token that you
generated in a previous step. For example:
.. code-block:: console
$ export OS_TOKEN=294a4c8a8a475f9b9836
#. Configure the endpoint:
.. code-block:: console
$ export OS_URL=http://controller:35357/v3
#. Configure the Identity API version:
.. code-block:: console
$ export OS_IDENTITY_API_VERSION=3
#. Install the OpenStack Python clients bundle:
.. code-block:: console
# clr_bundle_add openstack-python-clients
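Before running any ``openstack`` commands, it can be worth confirming that the temporary variables are actually set, since a missing one produces a confusing authentication error. The ``check_env`` helper below is a hypothetical convenience, not part of OpenStack:

```shell
# check_env: print the name of every variable in the argument list that is
# unset or empty, so missing credentials are caught before the first
# openstack command fails.
check_env() {
  for var in "$@"; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || echo "missing: $var"
  done
}

check_env OS_TOKEN OS_URL OS_IDENTITY_API_VERSION
```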
Create the service entity and API endpoints
-------------------------------------------
#. The Identity service manages a catalog of services in your OpenStack
environment. Services use this catalog to determine the other services
available in your environment.
Create the service entity for the Identity service:
.. code-block:: console
$ openstack service create \
--name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 4ddaae90388b4ebc9d252ec2252d8d10 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
#. The Identity service manages a catalog of API endpoints associated with
the services in your OpenStack environment. Services use this catalog to
determine how to communicate with other services in your environment.
OpenStack uses three API endpoint variants for each service: admin,
internal, and public. The admin API endpoint allows modifying users and
tenants by default, while the public and internal APIs do not allow these
operations. In a production environment, the variants might reside on
separate networks that service different types of users for security
reasons. For instance, the public API network might be visible from the
Internet so customers can manage their clouds. The admin API network
might be restricted to operators within the organization that manages
cloud infrastructure. The internal API network might be restricted to
the hosts that contain OpenStack services. Also, OpenStack supports
multiple regions for scalability. For simplicity, this guide uses the
management network for all endpoint variations and the default
``RegionOne`` region.
Create the Identity service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 30fff543e7dc4b7d9a0fb13791b78bf4 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c8c0927262a45ad9066cfe70d46892c |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 57cfa543e7dc4b712c0ab137911bc4fe |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6f8de927262ac12f6066cfe70d99ac51 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 78c3dfa3e7dc44c98ab1b1379122ecb1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 34ab3d27262ac449cba6cfe704dbc11f |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+
Create projects, users, and roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Complete the following steps to create projects, users, and roles:
#. Create an administrative project, user, and role for administrative
operations in your environment:
* Create the ``admin`` project:
.. code-block:: console
$ openstack project create --domain default \
--description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | default |
| enabled | True |
| id | 343d245e850143a096806dfaefa9afdc |
| is_domain | False |
| name | admin |
| parent_id | None |
+-------------+----------------------------------+
* Create the ``admin`` user. Replace ``ADMIN_PASS`` with a suitable
password and ``EMAIL_ADDRESS`` with a suitable e-mail address:
.. code-block:: console
$ openstack user create --domain default \
--password ADMIN_PASS --email EMAIL_ADDRESS admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| email | admin@example.com |
| enabled | True |
| id | ac3377633149401296f6c0d92d79dc16 |
| name | admin |
+-----------+----------------------------------+
* Create the ``admin`` role:
.. code-block:: console
$ openstack role create admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | cd2cb9a39e874ea69e5d4b896eb16128 |
| name | admin |
+-------+----------------------------------+
* Add the ``admin`` role to the ``admin`` project and user:
.. code-block:: console
$ openstack role add --project admin --user admin admin
#. This guide uses a service project that contains a unique user for each
service that you add to your environment. Create the ``service``
project:
.. code-block:: console
$ openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 894cdfa366d34e9d835d3de01e752262 |
| is_domain | False |
| name | service |
| parent_id | None |
+-------------+----------------------------------+
#. Regular (non-admin) tasks should use an unprivileged project and user.
As an example, this guide creates the ``demo`` project and user.
* Create the ``demo`` project:
.. code-block:: console
$ openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | ed0b60bf607743088218b0a533d5943f |
| is_domain | False |
| name | demo |
| parent_id | None |
+-------------+----------------------------------+
* Create the ``demo`` user. Replace ``DEMO_PASS``
with a suitable password and ``EMAIL_ADDRESS`` with a suitable
e-mail address:
.. code-block:: console
$ openstack user create --domain default \
--password DEMO_PASS --email EMAIL_ADDRESS demo
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| email | demo@example.com |
| enabled | True |
| id | 58126687cbcc4888bfa9ab73a2256f27 |
| name | demo |
+-----------+----------------------------------+
* Create the ``user`` role:
.. code-block:: console
$ openstack role create user
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 997ce8d05fc143ac97d83fdfb5998552 |
| name | user |
+-------+----------------------------------+
* Add the ``user`` role to the ``demo`` project and user:
.. code-block:: console
$ openstack role add --project demo --user demo user
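The project/user/role pattern above repeats for every additional service you register (glance, nova, and so on). As a sketch, a helper can print the commands for review before you run them; ``make_service_user_cmds`` is a hypothetical name, not an OpenStack tool:

```shell
# Print (without executing) the two commands needed to register a new
# service user with the admin role on the service project.
make_service_user_cmds() {
  name=$1
  pass=$2
  echo "openstack user create --domain default --password $pass $name"
  echo "openstack role add --project service --user $name admin"
}

make_service_user_cmds glance GLANCE_PASS
```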
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Identity service before installing other
services.
#. For security reasons, remove the temporary authentication token: edit
the ``[DEFAULT]`` section of ``/etc/keystone/keystone.conf`` and remove
the ``admin_token`` option.
#. Unset the temporary ``OS_TOKEN`` and ``OS_URL`` environment variables:
.. code-block:: console
$ unset OS_TOKEN OS_URL
#. As the ``admin`` user, request an authentication token:
.. code-block:: console
$ openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password \
token issue
Password:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2015-03-24T18:55:01Z |
| id | ff5ed908984c4a4190f584d826d75fed |
| project_id | cf12a15c5ea84b019aec3dc45580896b |
| user_id | 4d411f2291f34941b30eef9bd797505a |
+------------+----------------------------------+
#. As the ``demo`` user, request an authentication token:
.. code-block:: console
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name demo --os-username demo --os-auth-type password \
token issue
Password:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2014-10-10T12:51:33Z |
| id | 1b87ceae9e08411ba4a16e4dada04802 |
| project_id | 4aa51bb942be4dd0ac0555d7591f80a6 |
| user_id | 7004dfa0dda84d63aef81cf7f100af01 |
+------------+----------------------------------+

OpenStack Image
###############
The OpenStack Image service (glance) enables users to discover,
register, and retrieve virtual machine images. It offers a
REST API that enables you to query virtual
machine image metadata and retrieve an actual image.
You can store virtual machine images made available through
the Image service in a variety of locations, from simple file
systems to object-storage systems like OpenStack Object Storage.
.. important::
For simplicity, this guide describes configuring the Image service to
use the ``file`` back end, which uploads and stores images in a
directory on the controller node hosting the Image service. By
default, this directory is ``/var/lib/glance/images/``.
Before you proceed, ensure that the controller node has at least
several gigabytes of space available in this directory.
For information on requirements for other back ends, see
`Configuration Reference <http://docs.openstack.org/liberty/
config-reference/content/
ch_configuring-openstack-image-service.html>`__.
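The free-space requirement in the note above can be checked up front. A minimal sketch (the 2 GiB threshold is an assumption sufficient for the CirrOS-based verification later in this guide, not an official requirement; the check falls back to ``/`` when the directory has not been created yet):

```shell
# Report free space in the default glance image directory before uploading.
IMAGE_DIR=/var/lib/glance/images
[ -d "$IMAGE_DIR" ] || IMAGE_DIR=/        # directory is created at install time
avail_kb=$(df -Pk "$IMAGE_DIR" | awk 'NR==2 {print $4}')
required_kb=$((2 * 1024 * 1024))          # 2 GiB, expressed in KiB
if [ "$avail_kb" -lt "$required_kb" ]; then
  echo "WARNING: less than 2 GiB free in $IMAGE_DIR" >&2
fi
echo "available: ${avail_kb} KiB"
```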
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.
Prerequisites
-------------
Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``glance`` database:
.. code-block:: console
CREATE DATABASE glance;
* Grant proper access to the ``glance`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
Replace ``GLANCE_DBPASS`` with a suitable password.
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. To create the service credentials, complete these steps:
* Create the ``glance`` user. Replace ``GLANCE_PASS`` with a suitable
password.
.. code-block:: console
$ openstack user create --domain default --password GLANCE_PASS glance
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | e38230eeff474607805b596c91fa15d9 |
| name | glance |
+-----------+----------------------------------+
* Add the ``admin`` role to the ``glance`` user and
``service`` project:
.. code-block:: console
$ openstack role add --project service --user glance admin
* Create the ``glance`` service entity:
.. code-block:: console
$ openstack service create --name glance \
--description "OpenStack Image service" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image service |
| enabled | True |
| id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| name | glance |
| type | image |
+-------------+----------------------------------+
#. Create the Image service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 340be3625e9b4239a6415d034e98aace |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0c37ed58103f4300a84ff125a539032d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install and configure components
--------------------------------
#. Install the OpenStack Image bundle:
.. code:: console
# clr_bundle_add openstack-image
#. Custom configurations are located in ``/etc/glance``:
* Create the ``/etc/glance`` directory:
.. code:: console
# mkdir /etc/glance
* Create empty configuration files ``/etc/glance/glance-api.conf``
and ``/etc/glance/glance-registry.conf``:
.. code:: console
# touch /etc/glance/glance-{api,registry}.conf
#. Edit the ``/etc/glance/glance-api.conf`` file and complete
the following actions:
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance
Replace ``GLANCE_DBPASS`` with the password you chose for the
Image service database.
* In the ``[keystone_authtoken]`` section, configure Identity
service access:
.. code-block:: ini
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
Replace ``GLANCE_PASS`` with the password you chose for the
``glance`` user in the Identity service.
#. Edit the ``/etc/glance/glance-registry.conf`` file and
complete the following actions:
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance
Replace ``GLANCE_DBPASS`` with the password you chose for the
Image service database.
* In the ``[keystone_authtoken]`` section, configure Identity
service access:
.. code-block:: ini
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
Replace ``GLANCE_PASS`` with the password you chose for the
``glance`` user in the Identity service.
#. Let systemd set the correct permissions for files in ``/etc/glance``:
.. code:: console
# systemctl restart update-triggers.target
#. Populate the Image service database:
.. code:: console
# su -s /bin/sh -c "glance-manage db_sync" glance
Finalize installation
---------------------
#. Start the Image services and configure them to start when the
system boots:
.. code:: console
# systemctl enable glance-api.service glance-registry.service
# systemctl start glance-api.service glance-registry.service
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Image service using
`CirrOS <http://launchpad.net/cirros>`__, a small
Linux image that helps you test your OpenStack deployment.
For more information about how to download and build images, see
`OpenStack Virtual Machine Image Guide
<http://docs.openstack.org/image-guide/content/index.html>`__.
For information about how to manage images, see the
`OpenStack User Guide
<http://docs.openstack.org/user-guide/common/cli_manage_images.html>`__.
#. In each client environment script, configure the Image service
client to use API version 2.0:
.. code-block:: console
$ echo "export OS_IMAGE_API_VERSION=2" \
| tee -a admin-openrc.sh demo-openrc.sh
#. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. Download the source image:
.. code-block:: console
$ curl -Ok http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
#. Upload the image to the Image service using the
``QCOW2`` disk format, ``bare`` container format, and
public visibility so all projects can access it:
.. code-block:: console
$ openstack image create cirros --file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2015-10-26T23:40:03Z |
| disk_format | qcow2 |
| file | /v2/images/fcf6fa55-56e9-4402-8137-3e9315c84905/file |
| id | fcf6fa55-56e9-4402-8137-3e9315c84905 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2e3093872ebf4143a122e2cc01a50d13 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2015-10-26T23:40:03Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
#. Confirm upload of the image and validate attributes:
.. code-block:: console
$ openstack image list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| fcf6fa55-56e9-4402-8137-3e9315c84905 | cirros |
+--------------------------------------+--------+
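The ``checksum`` field reported by ``openstack image create`` above is the MD5 digest of the uploaded file, so the upload can also be cross-checked locally. A sketch (``verify_image_checksum`` is a hypothetical helper, not an OpenStack command):

```shell
# Compare a local file's MD5 digest against the checksum reported by the
# Image service; returns non-zero on mismatch.
verify_image_checksum() {
  file=$1
  expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH: got $actual" >&2
    return 1
  fi
}
```

For the CirrOS image above, this would be ``verify_image_checksum cirros-0.3.4-x86_64-disk.img ee1eca47dc88f4879d8a229cc70a07c6``.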
