author    colin-mueller-sonarsource <colin.mueller@sonarsource.com> 2020-03-06 16:17:09 +0100
committer sonartech <sonartech@sonarsource.com> 2020-03-06 20:04:32 +0000
commit    240f3d8bb56af9f162d61eb9fca4991b362198cd (patch)
tree      a9028d222ad91774f2e30ecb5040d7e46a5af815 /server
parent    fb03a19566ef193e4860fb276824ff315ca5047d (diff)
DOCS Remove true requirements from HW recommendations
Diffstat (limited to 'server')
-rw-r--r--  server/sonar-docs/src/pages/instance-administration/monitoring.md    |  2 +-
-rw-r--r--  server/sonar-docs/src/pages/requirements/hardware-recommendations.md | 13 +------------
2 files changed, 2 insertions(+), 13 deletions(-)
diff --git a/server/sonar-docs/src/pages/instance-administration/monitoring.md b/server/sonar-docs/src/pages/instance-administration/monitoring.md
index fbabc6ebcde..f13cb0597fc 100644
--- a/server/sonar-docs/src/pages/instance-administration/monitoring.md
+++ b/server/sonar-docs/src/pages/instance-administration/monitoring.md
@@ -31,7 +31,7 @@ You may need to increase your memory settings if you see the following symptoms:
You can increase the maximum memory allocated to the appropriate process by increasing the -Xmx memory setting for the corresponding Java process in your _$SONARQUBE-HOME/conf/sonar.properties_ file:
* For Web: sonar.web.javaOpts
-* For ElasticSearch: sonar.search.javaOpts
+* For ElasticSearch: sonar.search.javaOpts (It is recommended to set the min and max memory to the same value to prevent the heap from resizing at runtime, a very costly process)
* For Compute Engine: sonar.ce.javaOpts
The -Xmx parameter accepts numbers in both megabytes (e.g. -Xmx2048m) and gigabytes (e.g. -Xmx2G)
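As an illustrative sketch of the settings this hunk describes (heap sizes here are placeholder examples, not recommendations), the properties live in _$SONARQUBE-HOME/conf/sonar.properties_:

```properties
# Web server JVM options
sonar.web.javaOpts=-Xmx512m -Xms128m
# Elasticsearch: set min (-Xms) and max (-Xmx) heap to the same value
# to prevent costly heap resizing at runtime
sonar.search.javaOpts=-Xms2G -Xmx2G
# Compute Engine JVM options
sonar.ce.javaOpts=-Xmx512m -Xms128m
```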
diff --git a/server/sonar-docs/src/pages/requirements/hardware-recommendations.md b/server/sonar-docs/src/pages/requirements/hardware-recommendations.md
index b4801e1648c..cc799080597 100644
--- a/server/sonar-docs/src/pages/requirements/hardware-recommendations.md
+++ b/server/sonar-docs/src/pages/requirements/hardware-recommendations.md
@@ -16,15 +16,11 @@ In case your SonarQube Server is running on Linux and you are using Oracle, the
## Elasticsearch (ES)
* [Elasticsearch](https://www.elastic.co/) is used by SonarQube in the background in the SearchServer process. To ensure good performance of your SonarQube, you need to follow these recommendations that are linked to ES usage.
-### JVM
-* It is recommended to set the min and max memory to the same value to prevent the heap from resizing at runtime, a very costly process. See -Xms and -Xmx of property `sonar.search.javaOpts`.
-
### Disk
* Free disk space is an absolute requirement. ES implements a safety mechanism to prevent the disk from being flooded with index data that locks all indices in read-only mode when a 95% disk usage watermark is reached. For information on recovering from ES read-only indices, see the [Troubleshooting](/setup/troubleshooting/) page.
* Disk can easily become the bottleneck of ES. If you can afford SSDs, they are by far superior to any spinning media. SSD-backed nodes see boosts in both query and indexing performance. If you use spinning media, try to obtain the fastest disks possible (high performance server disks 15k RPM drives).
-* Make sure to increase the number of open files descriptors on the machine (or for the user running SonarQube server). Setting it to 32k or even 64k is recommended. See [this ElasticSearch article](https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html).
* Using RAID 0 is an effective way to increase disk speed, for both spinning disks and SSD. There is no need to use mirroring or parity variants of RAID because of Elasticsearch replicas and database primary storage.
-8 Do not use remote-mounted storage, such as NFS, SMB/CIFS or network-attached storages (NAS). They are often slower, display larger latencies with a wider deviation in average latency, and are a single point of failure.
+* Do not use remote-mounted storage, such as NFS, SMB/CIFS or network-attached storages (NAS). They are often slower, display larger latencies with a wider deviation in average latency, and are a single point of failure.
**Advanced**
* If you are using SSD, make sure your OS I/O Scheduler is configured correctly. When you write data to disk, the I/O Scheduler decides when that data is actually sent to the disk. The default under most *nix distributions is a scheduler called cfq (Completely Fair Queuing). This scheduler allocates "time slices" to each process, and then optimizes the delivery of these various queues to the disk. It is optimized for spinning media: the nature of rotating platters means it is more efficient to write data to disk based on physical layout. This is very inefficient for SSD, however, since there are no spinning platters involved. Instead, deadline or noop should be used instead. The deadline scheduler optimizes based on how long writes have been pending, while noop is just a simple FIFO queue. This simple change can have dramatic impacts.
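The scheduler change described in the Advanced note above might look like the following on Linux. This is a sketch only: the device name `sda` is a placeholder, switching schedulers requires root, and newer kernels expose `mq-deadline`/`none` rather than `deadline`/`noop`.

```shell
# Show the currently active scheduler; the one in [brackets] is selected.
cat /sys/block/sda/queue/scheduler

# Switch to the deadline scheduler until the next reboot (requires root):
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# noop is the other SSD-friendly choice mentioned above:
# echo noop | sudo tee /sys/block/sda/queue/scheduler
```

The change via sysfs is not persistent; to survive reboots it would typically go in a udev rule or the kernel boot parameters.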
@@ -34,13 +30,6 @@ In case your SonarQube Server is running on Linux and you are using Oracle, the
* Machine available memory for OS must be at least the Elasticsearch heap size. The reason is that Lucene (used by ES) is designed to leverage the underlying OS for caching in-memory data structures. That means that by default OS must have at least 1Gb of available memory.
* Don't allocate more than 32Gb. See [this ElasticSearch article](http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html) for more details.
-**Advanced**
-* Elasticsearch uses a hybrid mmapfs / niofs directory by default to store its indices. The default operating system limits on mmap counts is likely to be too low, which may result in out of memory exceptions. On Linux, you can increase the limits by running the following command as root :
-```
-sysctl -w vm.max_map_count=262144
-```
-To set this value permanently, update the `vm.max_map_count` setting in `/etc/sysctl.conf`.
-
### CPU
* If you need to choose between faster CPUs or more cores, then choose more cores. The extra concurrency that multiple cores offers will far outweigh a slightly faster clock-speed.
* By nature data are distributed on multiples nodes, so execution time depends on the slowest node. It's better to have multiple medium boxes than one fast + one slow.
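The Memory guidance above (OS available memory should be at least the Elasticsearch heap size) can be sanity-checked with a small script. This is an illustrative sketch, not part of the docs; the 2 GB heap value is an assumed example taken from a hypothetical `sonar.search.javaOpts` setting, and `/proc/meminfo` assumes Linux.

```shell
#!/bin/sh
# Assumed ES heap size in GB (whatever -Xmx is set to in sonar.search.javaOpts)
es_heap_gb=2
# MemAvailable is reported in kB; convert to whole GB
available_gb=$(awk '/MemAvailable/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Available: ${available_gb} GB, ES heap: ${es_heap_gb} GB"
if [ "${available_gb}" -ge "${es_heap_gb}" ]; then
  echo "OK: available memory covers the ES heap"
else
  echo "WARNING: ES heap exceeds available OS memory"
fi
```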