
SONAR-12066 Update Docs for Scalability

tags/7.8
MikeBirnstiehl, 5 years ago
parent commit 169b1ef0c3

+1 -0 server/sonar-docs/src/pages/instance-administration/monitoring.md

@@ -62,6 +62,7 @@ All these MBeans are read-only. It's not possible to modify or reset their value
| InProgressCount | Number of Background Tasks currently under processing. Its value is either 1 or 0, since SonarQube can process only one task at a time.
| SuccessCount | Number of Background Tasks successfully processed since the last restart of SonarQube
| WorkerCount | Number of Background Tasks that can be processed at the same time
+| PendingTime | Pending time (in ms) of the oldest Background Task waiting to be processed. This measure, together with PendingCount, helps you know if analyses are stacking and taking too long to start processing. This helps you evaluate if it might be worth configuring additional Compute Engine workers (Enterprise Edition) or additional nodes (Data Center Edition) to improve SonarQube performance.
|
| Note:
| * the total number of Background Tasks handled since the last restart of SonarQube is equal to SuccessCount + ErrorCount

+40 -38 server/sonar-docs/src/pages/setup/install-cluster.md

@@ -11,14 +11,15 @@ The Data Center Edition allows SonarQube to run in a clustered configuration to

## Overview

-The only supported configuration for the Data Center Edition comprises 5 application servers, a load balancer and a database server:
+The default configuration for the Data Center Edition comprises five servers, a load balancer, and a database server:

-- 2 application nodes responsible for handling web requests from users (WebServer process) and handling analysis reports (ComputeEngine process)
-- 3 search nodes that host Elasticsearch process that will store indices of data. For performance reasons, SSD are significantly better than HDD for these nodes
-- PostgreSQL, Oracle or Microsoft SQL Server database server. This software must be supplied by the installing organization
-- A reverse proxy / load balancer to load balance traffic between the two application nodes. This hardware or software component must be supplied by the installing organization
+- Two application nodes responsible for handling web requests from users (WebServer process) and handling analysis reports (ComputeEngine process). You can add application nodes to increase computing capabilities.
+- Three search nodes that host the Elasticsearch process that will store data indices. SSDs perform significantly better than HDDs for these nodes.
+- A reverse proxy / load balancer to load balance traffic between the two application nodes. The installing organization must supply this hardware or software component.
+- PostgreSQL, Oracle, or Microsoft SQL Server database server. This software must be supplied by the installing organization.

-With this configuration, one application node and one search node can be lost without impacting users. Here is a diagram of the supported topology:
+With this configuration, one application node and one search node can be lost without impacting users. Here is a diagram of the default topology:

![DCE Cluster Machines Topology.](/images/cluster-dce.png)

@@ -26,18 +27,18 @@ With this configuration, one application node and one search node can be lost wi

### Network

-All servers, including database server, must be co-located (geographical redundancy is not supported) and have static IP addressess (reference via hostname is not supported). Network traffic should not be restricted between application and search nodes.
+All servers, including the database server, must be co-located (geographical redundancy is not supported) and have static IP addresses (reference via hostname is not supported). Network traffic should not be restricted between application and search nodes.

### Servers

-Five servers are needed to form the SonarQube application cluster. Servers can be VMs; it is not necessary to use physical machines.
+You need a minimum of five servers (two application nodes and three search nodes) to form a SonarQube application cluster. You can add application nodes to increase computing capabilities. Servers can be virtual machines; it is not necessary to use physical machines.

-The operating system requirements for servers are available on the [Requirements](/requirements/requirements/) page. All application nodes should be identical in terms of hardware and software. Similarly, all search nodes should be identical to each other. But application and search nodes can differ. Generally, search nodes are configured with more CPU and RAM than application nodes.
+The operating system requirements for servers are available on the [Requirements](/requirements/requirements/) page. All application nodes should be identical in terms of hardware and software. Similarly, all search nodes should be identical to each other. Application and search nodes, however, can differ from one another. Generally, search nodes are configured with more CPU and RAM than application nodes.

-Here are the types of machines we use to perform our validation with a 200M issues database. This could be used as a minimum recommendation to build your cluster.
+Here are the machines we used to perform our validation with a 200M issues database. You can use this as a minimum recommendation to build your cluster.

- App Node made of [Amazon EC2 m4.xlarge](https://aws.amazon.com/ec2/instance-types/): 4 vCPUs, 16GB RAM
-- Search Node made of [Amazon EC2 m4.2xlarge](https://aws.amazon.com/ec2/instance-types/): 8 vCPUs, 32GB RAM - 16GB allocated to Elasticsearch. SSDs perform significantly better than HDDs for these nodes
+- Search Node made of [Amazon EC2 m4.2xlarge](https://aws.amazon.com/ec2/instance-types/): 8 vCPUs, 32GB RAM - 16GB allocated to Elasticsearch. SSDs perform significantly better than HDDs for these nodes.

### Database Server

@@ -47,13 +48,13 @@ Supported database systems are available on the [Requirements](/requirements/req

SonarSource does not provide specific recommendations for reverse proxy / load balancer or solution-specific configuration. The general requirements to use with SonarQube Data Center Edition are:

-- Ability to balance HTTP requests (load) between the two application nodes configured in the SonarQube cluster
-- If terminating HTTPS, meets the requirements set out in [Securing SonarQube Behind a Proxy](/setup/operate-server/)
-- No requirement to preserve or sticky sessions; this is handled by the built-in JWT mechanism
+- Ability to balance HTTP requests (load) between the application nodes configured in the SonarQube cluster.
+- If terminating HTTPS, meets the requirements set out in [Securing SonarQube Behind a Proxy](/setup/operate-server/).
+- No requirement for session preservation or sticky sessions; this is handled by the built-in JWT mechanism.
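As the list above notes, SonarSource leaves the choice of load balancer to the installing organization. Purely as an illustration (not an official recommendation), a minimal nginx sketch that balances the two application nodes from this page's cluster example, assuming SonarQube's default port 9000, might look like:

```
upstream sonarqube {
    server ip1:9000;
    server ip2:9000;
}

server {
    listen 80;
    location / {
        proxy_pass http://sonarqube;
    }
}
```

Sticky sessions are deliberately not configured here, since the built-in JWT mechanism makes them unnecessary.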

### License

-A dedicated license to activate the Data Center Edition. If you don't have it yet, please contact the SonarSource Sales Team.
+You need a dedicated license to activate the Data Center Edition. If you don't have one yet, please contact the SonarSource Sales Team.

### Support

@@ -61,7 +62,7 @@ Don't start this journey alone! As a Data Center Edition subscriber, SonarSourc

## Configuration

-Additional parameters are required to activate clustering capabilities and specialize each node. These parameters are in addition to standard configuration properties used in a single node configuration.
+Additional parameters are required to activate clustering capabilities and specialize each node. These parameters are in addition to standard configuration properties used in a single-node configuration.

The **sonar.properties** file on each node will be edited to configure the node's specialization. A list of all cluster-specific configuration parameters is available in the [Operate the Cluster](/setup/operate-cluster/) documentation.

@@ -75,16 +76,18 @@ echo -n "your_secret" | openssl dgst -sha256 -hmac "your_key" -binary | base64

The following example represents the minimal parameters required to configure a SonarQube cluster. The example assumes:

-- The VMs (server1, server2) having IP addresses ip1 and ip2 are going to be application nodes
-- The VMs having IP addresses ip3, ip4 and ip5 (server3, server4 and server5) are going to be search nodes
+- The VMs having IP addresses ip1 and ip2 (server1, server2) are application nodes
+- The VMs having IP addresses ip3, ip4, and ip5 (server3, server4, and server5) are search nodes

The configuration to be added to sonar.properties for each node is the following:

#### Application Nodes

**server1**
```
...
sonar.cluster.enabled=true
-sonar.cluster.hosts=ip1,ip2,ip3,ip4,ip5
+sonar.cluster.hosts=ip1,ip2
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=application
sonar.auth.jwtBase64Hs256Secret=YOURGENERATEDSECRET
@@ -95,18 +98,19 @@ sonar.auth.jwtBase64Hs256Secret=YOURGENERATEDSECRET
```
...
sonar.cluster.enabled=true
-sonar.cluster.hosts=ip1,ip2,ip3,ip4,ip5
+sonar.cluster.hosts=ip1,ip2
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=application
sonar.auth.jwtBase64Hs256Secret=YOURGENERATEDSECRET
...
```

#### Search Nodes

**server3**
```
...
sonar.cluster.enabled=true
-sonar.cluster.hosts=ip1,ip2,ip3,ip4,ip5
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=search
sonar.search.host=ip3
@@ -117,7 +121,6 @@ sonar.search.host=ip3
```
...
sonar.cluster.enabled=true
-sonar.cluster.hosts=ip1,ip2,ip3,ip4,ip5
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=search
sonar.search.host=ip4
@@ -128,7 +131,6 @@ sonar.search.host=ip4
```
...
sonar.cluster.enabled=true
-sonar.cluster.hosts=ip1,ip2,ip3,ip4,ip5
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=search
sonar.search.host=ip5
@@ -137,35 +139,35 @@ sonar.search.host=ip5

## Sample Installation Process

-The following is an example of the SonarQube cluster installation process. You need to tailor these steps to the specifics of the target installation environment and the operational requirements of the hosting organization.
+The following is an example of the default SonarQube cluster installation process. You need to tailor your installation to the specifics of the target installation environment and the operational requirements of the hosting organization.

**Prepare the cluster environment:**

-1. Prepare the cluster environment by setting up the network, provisioning nodes and load balancer.
-2. Follow the [Installing the Server](/setup/install-server/) documentation to configure the database server
+1. Prepare the cluster environment by setting up the network and provisioning the nodes and load balancer.
+2. Follow the [Installing the Server](/setup/install-server/) documentation to configure the database server.

**Prepare a personalized SonarQube package:**

1. On a single application node of the cluster, download and install SonarQube Data Center Edition, following the usual [Installing the Server](/setup/install-server/) documentation.
-2. Add cluster-related parameters to `$SONARQUBE_HOME/conf/sonar.properties`
-3. As the Marketplace is not available in SonarQube Data Center Edition, this is a good opportunity to install additional plugins. Download and place a copy of each plugin JAR in `$SONARQUBE_HOME/extensions/plugins`. Be sure to check compatibility with your SonarQube version using the [Plugin Version Matrix](https://docs.sonarqube.org/display/PLUG/Plugin+Version+Matrix)
-4. Zip the directory `$SONARQUBE_HOME`. This archive is a customized SonarQube Data Center Edition package that can be copied to other nodes
+2. Add cluster-related parameters to `$SONARQUBE_HOME/conf/sonar.properties`.
+3. As the Marketplace is not available in SonarQube Data Center Edition, this is a good opportunity to install additional plugins. Download and place a copy of each plugin JAR in `$SONARQUBE_HOME/extensions/plugins`. Be sure to check compatibility with your SonarQube version using the [Plugin Version Matrix](https://docs.sonarqube.org/display/PLUG/Plugin+Version+Matrix).
+4. Zip the directory `$SONARQUBE_HOME`. This archive is a customized SonarQube Data Center Edition package that can be copied to other nodes.

**Test configuration on a single node:**

-1. On the application node where you created your Zip package, comment out all cluster-related parameters in `$SONARQUBE_HOME/conf/sonar.properties`
-2. Configure the load balancer to proxy with single application node
-3. Start server and test access through load balancer
-4. Request license from SonarSource Sales Team
+1. On the application node where you created your Zip package, comment out all cluster-related parameters in `$SONARQUBE_HOME/conf/sonar.properties`.
+2. Configure the load balancer to proxy with the single application node.
+3. Start the server and test access through the load balancer.
+4. Request a license from the SonarSource Sales Team.
5. After applying the license, you will have a full-featured SonarQube system operating on a single node.

**Deploy SonarQube package on other nodes:**

-1. Unzip SonarQube package on the other four nodes
-2. Configure node-specific parameters on all five nodes in `$SONARQUBE_HOME/conf/sonar.properties` and ensure application node-specific and search node-specific parameters are properly set
-3. Start all search nodes
-4. After all search nodes are running, start all application nodes
-5. Configure the load balancer to proxy with both application nodes
+1. Unzip the SonarQube package on the other four nodes.
+2. Configure node-specific parameters on all five nodes in `$SONARQUBE_HOME/conf/sonar.properties` and ensure application node-specific and search node-specific parameters are properly set.
+3. Start all search nodes.
+4. After all search nodes are running, start all application nodes.
+5. Configure the load balancer to proxy with both application nodes.
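The copy-and-unzip in step 1 can be scripted. The sketch below only prints each command (a dry run) so you can review it first; the hostnames server2 through server5 and the `/opt` target directory are assumptions, and you would remove the `echo` to actually execute:

```shell
# Dry run: prints one copy/unzip command per remaining node.
# server2..server5 and /opt are placeholders for your environment.
PACKAGE=sonarqube-dce-package.zip
for node in server2 server3 server4 server5; do
  echo "scp $PACKAGE $node:/opt/ && ssh $node 'unzip -q /opt/$PACKAGE -d /opt'"
done
```

After copying, step 2 (node-specific `sonar.properties` edits) still has to be done by hand on each node.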

Congratulations, you have a fully functional SonarQube cluster. Once these steps are complete, take a break and have a coffee, then you can [Operate your Cluster](/setup/operate-cluster/).


+55 -20 server/sonar-docs/src/pages/setup/operate-cluster.md

@@ -3,33 +3,66 @@ title: Configure & Operate a Cluster
url: /setup/operate-cluster/
---

-_High Availability is a feature of the [Data Center Edition](https://redirect.sonarsource.com/editions/datacenter.html)._
+_High Availability and cluster scalability are features of the [Data Center Edition](https://redirect.sonarsource.com/editions/datacenter.html)._

+Once the [SonarQube cluster is installed](/setup/install-cluster/), you have a High Availability configuration that allows your SonarQube instance to stay up and running even if there is a crash or failure in one of the cluster's nodes. Your SonarQube cluster is also scalable, and you can add application nodes to increase your computing capabilities.

+## Start, Stop, or Upgrade the Cluster

-Once the the [SonarQube cluster is installed](/setup/install-cluster/), you have a High Availability configuration that will allow your SonarQube instance to stay up and running even if there is a crash or failure in one of the nodes of the cluster.

-## Start/Stop/Upgrade the Cluster
### Start the Cluster
To start a cluster, you need to follow these steps in order:

1. Start the search nodes
1. Start the application nodes

### Stop the Cluster
To stop a cluster, you need to follow these steps in order:

1. Stop the application nodes
1. Stop the search nodes

### Upgrade SonarQube
-1. Stop the cluster
-1. Upgrade SonarQube on all nodes (app part, plugins, JDBC driver if required) following the usual Upgrade procedure but without triggering the /setup phase
-1. Once all nodes have the same binaries: start the cluster
-1. At this point only one of the application nodes is up. Try to access `node_ip:port/setup` on each server, and trigger the setup operation on the one that responds.

-## Install/Upgrade a Plugin
-1. Stop the cluster
-1. Upgrade the plugin on all nodes
-1. Start the cluster

-## Monitoring
+1. Stop the cluster.
+1. Upgrade SonarQube on all nodes (application part, plugins, JDBC driver if required) following the usual upgrade procedure but without triggering the /setup phase.
+1. Once all nodes have the same binaries: restart the cluster.
+1. At this point, only one of the application nodes is up. Try to access `node_ip:port/setup` on each application node, and trigger the setup operation on the one that responds.

## Start or Stop a Node
You can start or stop a single node in the same way as starting and stopping an instance using a single server. By default, it's a graceful shutdown where no new analysis report processing can start, but the tasks in progress are allowed to finish.

## Install or Upgrade a Plugin
1. Stop the application nodes.
1. Install or upgrade the plugin on the application nodes.
* If upgrading, remove the old version.
* You don't need to install plugins on search nodes.
1. Restart the application nodes.
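The file handling in steps 1 and 2 can be sketched as follows. The plugin name and the temporary path are made up for illustration (`sonar-example-plugin` is not a real plugin, and `/tmp/sonarqube-7.8` stands in for `$SONARQUBE_HOME`):

```shell
# Assumption: example names/paths; run on each stopped application node.
PLUGINS=/tmp/sonarqube-7.8/extensions/plugins
mkdir -p "$PLUGINS"
touch "$PLUGINS/sonar-example-plugin-1.0.jar"        # simulates the old installed version

rm -f "$PLUGINS"/sonar-example-plugin-*.jar          # remove the old version first
touch "$PLUGINS/sonar-example-plugin-2.0.jar"        # stands in for copying the new JAR
ls "$PLUGINS"
```

Repeat on every application node (search nodes don't host plugins), then restart the application nodes.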

## Scalability
You have the option of adding application nodes (up to 10 total application nodes) to your cluster to increase computing capabilities.

### Adding an Application Node
To add an Application Node:

1. Configure your new application node in sonar.properties. The following is an example of the configuration to be added to sonar.properties for a sixth application node (server6, ip6) in a cluster with the default five servers:

**server6**
```
...
sonar.cluster.enabled=true
sonar.cluster.hosts=ip1,ip2,ip6
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=application
sonar.auth.jwtBase64Hs256Secret=YOURGENERATEDSECRET
...
```
2. Update the configuration of the preexisting nodes to include your new node.

While you don't need to restart the cluster after adding a node, you should ensure the configuration is up to date on all of your nodes to avoid issues when you eventually do need to restart.

### Removing an Application Node
When you remove an application node, make sure to update the configuration of the remaining nodes. Much like adding a node, while you don't need to restart the cluster after removing a node, you should ensure the configuration is up to date on all of your nodes to avoid issues when you eventually do need to restart.
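Continuing the example above: if the hypothetical sixth node (server6, ip6) were later removed, each remaining application node's sonar.properties would drop ip6 from the host list again, a sketch reusing the example IPs:

```
...
sonar.cluster.enabled=true
sonar.cluster.hosts=ip1,ip2
sonar.cluster.search.hosts=ip3,ip4,ip5
sonar.cluster.node.type=application
...
```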

## Monitoring
CPU and RAM usage on each node have to be monitored separately with an APM.

In addition, we provide a Web API _api/system/health_ you can use to validate that all of the nodes in your cluster are operational.
@@ -38,10 +71,13 @@ In addition, we provide a Web API _api/system/health_ you can use to validate th
* YELLOW: SonarQube is usable, but it needs attention in order to be fully operational
* RED: SonarQube is not operational

-To call it from a monitoring system without having to give admin credentials, it is possible to setup a System Passcode through the property `sonar.web.systemPasscodez. This must be configured in _$SONARQUBE-HOME/conf/sonar.properties_.
+To call it from a monitoring system without having to give admin credentials, it is possible to set up a System Passcode through the property `sonar.web.systemPasscode`. This must be configured in _$SONARQUBE-HOME/conf/sonar.properties_.
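A sketch of scripting that health check from a monitoring system, where the base URL and passcode value are placeholders you would replace, and the passcode travels in the `X-Sonar-Passcode` HTTP header. It is printed as a dry run here because it needs a live instance; drop the `echo` to execute:

```shell
# Placeholders: adjust BASE_URL and PASSCODE to your installation;
# PASSCODE must match sonar.web.systemPasscode in sonar.properties.
BASE_URL="https://sonarqube.example.org"
PASSCODE="your_passcode"
# Dry run: prints the curl command instead of calling a live server.
echo curl -s -H "X-Sonar-Passcode: $PASSCODE" "$BASE_URL/api/system/health"
```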

+### Cluster Status
+On the System Info page at **Administration > System**, you can check whether your cluster is running safely (green) or has some nodes with problems (orange or red).

-### Manually Check the Status of your SQ Cluster from the UI
-In the System Info page, you can check whether your cluster is running safely (green) or has some nodes with problems (orange or red).

### Maximum Pending Time for Tasks
On the global Background Tasks page at **Administration > Projects > Background Tasks**, you can see the number of **pending** tasks as well as the maximum **pending time** for the tasks in the queue. This shows the pending time of the oldest background task waiting to be processed. You can use this to evaluate if it might be worth configuring additional Compute Engine workers (Enterprise Edition) or additional nodes (Data Center Edition) to improve SonarQube performance.

## Compute Engine Workers
If you change the number of [Compute Engine workers](/instance-administration/compute-engine-performance/) in the UI, you must restart each application node to have the change take effect.
@@ -64,7 +100,6 @@ Property | Description | Default | Required |
---|---|---|---|
`sonar.cluster.enabled`|Set to `true` in each node to activate the cluster mode|`false`|yes
`sonar.cluster.name`|The name of the cluster. **Required if multiple clusters are present on the same network.** For example this prevents mixing Production and Preproduction clusters. This will be the name stored in the Hazelcast cluster and used as the name of the Elasticsearch cluster.|`sonarqube`|no
-`sonar.cluster.hosts`|Comma-delimited list of all **application** hosts in the cluster. This value must contain **only application hosts**. Each item in the list must contain the port if the default `sonar.cluster.node.port` value is not used. Item format is `sonar.cluster.node.host` or `sonar.cluster.node.host:sonar.cluster.node.port`.| |yes
`sonar.cluster.search.hosts`|Comma-delimited list of search hosts in the cluster. Each item in the list must contain the port if the default `sonar.search.port` value is not used. Item format is `sonar.search.host` or `sonar.search.host:sonar.search.port`.| |yes
`sonar.cluster.node.name`|The name of the node that is used on Elasticsearch and stored in Hazelcast member attribute (NODE_NAME) for sonar-application|`sonarqube-{UUID}`|no
`sonar.cluster.node.type`|Type of node: either `application` or `search`| |yes
@@ -74,6 +109,7 @@ Property | Description | Default | Required |
### Application nodes
Property | Description | Required
---|---|---
+`sonar.cluster.hosts`|Comma-delimited list of all **application** hosts in the cluster. This value must contain **only application hosts**. Each item in the list must contain the port if the default `sonar.cluster.node.port` value is not used. Item format is `sonar.cluster.node.host` or `sonar.cluster.node.host:sonar.cluster.node.port`.|yes
`sonar.cluster.node.port`|The Hazelcast port for communication with each application member of the cluster. Default: `9003`|no|
`sonar.cluster.node.web.port`|Hazelcast port for communication with the ComputeEngine process. Port must be accessible to all other search and application nodes. If not specified, a dynamic port will be chosen and all ports must be open among the nodes.|no
`sonar.cluster.node.ce.port`|Hazelcast port for communication with the WebServer process. Port must be accessible to all other search and application nodes. If not specified, a dynamic port will be chosen and all ports must be open among the nodes.|no
@@ -100,4 +136,3 @@ No. Multicast is disabled. All hosts (IP+port) must be listed.
Yes, but it's best to have five machines to be truly resilient to failures.
### Can the members of a cluster be discovered automatically?
No, all nodes must be configured in _$SONARQUBE-HOME/conf/sonar.properties_.


+24 -3 server/sonar-docs/src/pages/setup/operate-server.md

@@ -7,19 +7,36 @@ url: /setup/operate-server/

## Running SonarQube as a Service on Windows

-### Install/uninstall NT service (may have to run these files via Run As Administrator):
+### Install or Uninstall NT Service (you may have to run these files via Run As Administrator):

```
%SONARQUBE_HOME%/bin/windows-x86-64/InstallNTService.bat
%SONARQUBE_HOME%/bin/windows-x86-64/UninstallNTService.bat
```

-### Start/stop the service:
+### Start or Stop the Service:

```
%SONARQUBE_HOME%/bin/windows-x86-64/StartNTService.bat
%SONARQUBE_HOME%/bin/windows-x86-64/StopNTService.bat
```
**Note:** `%SONARQUBE_HOME%/bin/windows-x86-64/StopNTService.bat` does a graceful shutdown where no new analysis report processing can start, but the tasks in progress are allowed to finish. The time a stop will take depends on the processing time of the tasks in progress. You'll need to kill all SonarQube processes manually to force a stop.

## Running SonarQube Manually on Linux

### Start or Stop the Instance

```
# Start:
$SONAR_HOME/bin/linux-x86-64/sonar.sh start

# Graceful shutdown:
$SONAR_HOME/bin/linux-x86-64/sonar.sh stop

# Hard stop:
$SONAR_HOME/bin/linux-x86-64/sonar.sh force-stop
```
**Note:** `stop` does a graceful shutdown where no new analysis report processing can start, but the tasks in progress are allowed to finish. The time a stop takes depends on the processing time of the tasks in progress. Use `force-stop` for a hard stop.

## Running SonarQube as a Service on Linux with SystemD

@@ -57,7 +74,7 @@ WantedBy=multi-user.target
* Because the sonar-application jar name ends with the version of SonarQube, you will need to adjust the `ExecStart` command accordingly on install and at each upgrade.
* The SonarQube data directory, `/opt/sonarqube/data`, and the extensions directory, `/opt/sonarqube/extensions`, should be owned by the `sonarqube` user. As a good practice, the rest should be owned by `root`.

-Once your `sonarqube.service` file is created and properly configured, run
+Once your `sonarqube.service` file is created and properly configured, run:
```
sudo systemctl enable sonarqube.service
sudo systemctl start sonarqube.service
@@ -97,6 +114,10 @@ sudo ln -s $SONAR_HOME/bin/linux-x86-64/sonar.sh /usr/bin/sonar
sudo chmod 755 /etc/init.d/sonar
sudo chkconfig --add sonar
```
Once registration is done, run:
```
sudo service sonar start
```

## Securing the Server Behind a Proxy

