From: G. Ann Campbell Date: Wed, 18 Jul 2018 16:37:54 +0000 (-0400) Subject: Edit documentation X-Git-Tag: 7.5~724 X-Git-Url: https://source.dussan.org/?a=commitdiff_plain;h=568d70aa1585f5f22d82ea38600171f2c73935c8;p=sonarqube.git Edit documentation --- diff --git a/server/sonar-docs/src/EmbedDocsSuggestions.json b/server/sonar-docs/src/EmbedDocsSuggestions.json index d512919471c..996ecb4c49b 100644 --- a/server/sonar-docs/src/EmbedDocsSuggestions.json +++ b/server/sonar-docs/src/EmbedDocsSuggestions.json @@ -1,7 +1,12 @@ { "account": [], "api_documentation": [], - "background_tasks": [], + "background_tasks": [ + { + "link": "/documentation/analysis/background-tasks", + "text": "About Background Tasks" + } + ], "code": [], "coding_rules": [ { @@ -18,13 +23,27 @@ "link": "/documentation/fixing-the-water-leak", "text": "Fixing the Water Leak" }, + { + "link":"/documentation/metric-definitions", + "text":"Metric Definitions" + }, { "link": "/documentation/keyboard-shortcuts", "text": "Keyboard Shortcuts" } ], - "custom_measures": [], - "custom_metrics": [], + "custom_measures": [ + { + "link": "/documentation/custom-measures", + "text": "About Custom Measures" + } + ], + "custom_metrics": [ + { + "link": "/documentation/custom-measures", + "text": "Custom Measures" + } + ], "extension_billing": [ { "link": "/documentation/sonarcloud-pricing", @@ -71,13 +90,36 @@ { "link": "/documentation/fixing-the-water-leak", "text": "Fixing the Water Leak" + }, + { + "link": "/documentation/branches/index", + "text": "Branches Overview" + }, + { + "link": "/documentation/analysis/pull-request", + "text": "Analyzing Pull Requests" } ], "permission_templates": [], - "profiles": [], + "profiles": [ + { + "link": "/documentation/quality-profiles", + "text": "Quality Profiles" + } + ], "project_activity": [], - "project_quality_gate": [], - "project_quality_profiles": [], + "project_quality_gate": [ + { + "link": "/documentation/fixing-the-water-leak", + "text": "Fixing the Water Leak" + } + ], + "project_quality_profiles": [ + { + "link": "/documentation/quality-profiles", + "text": "About Quality Profiles" + } + ], "projects_management": [ { "link": "/documentation/analyze-a-project", @@ -107,7 +149,7 @@ "security_reports": [ { "link": "/documentation/security-reports", - "text": "Security Reports" + "text": "About Security Reports" } ], "settings": [], @@ -120,5 +162,10 @@ } ], "users": [], - "webhooks": [] + "webhooks": [ + { + "link": "/documentation/webhooks", + "text": "About Webhooks" + } + ] } diff --git a/server/sonar-docs/src/pages/analysis/background-tasks.md b/server/sonar-docs/src/pages/analysis/background-tasks.md new file mode 100644 index 00000000000..62157db41da --- /dev/null +++ b/server/sonar-docs/src/pages/analysis/background-tasks.md @@ -0,0 +1,39 @@ +--- +title: Background Tasks +--- + +A Background Task can be: +* the import of an Analysis Report +* the computation of a Portfolio +* the import or export of a project + +## What happens after the scanner is done analyzing? + +Analysis is not complete until the relevant Background Task has been completed. Even though the SonarQube Scanner's log shows `EXECUTION SUCCESS`, the analysis results will not be visible in the SonarQube project until the Background Task has been completed. After a SonarQube Scanner has finished analyzing your code, the result of the analysis (Sources, Issues, Metrics) - the Analysis Report - is sent to SonarQube Server for final processing by the Compute Engine. Analysis Reports are queued and processed serially. 
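+If you want to follow a specific report from the command line, the scanner typically records the Compute Engine task id in `.scannerwork/report-task.txt`, and the task status is exposed by the Web API. A minimal sketch, assuming a local server; the host, credentials and task id below are placeholders:
+
+```
+# Hedged sketch: poll the Compute Engine for the status of a queued Analysis Report.
+# The real task id is the one written by the scanner into .scannerwork/report-task.txt.
+curl -u admin:admin "http://localhost:9000/api/ce/task?id=EXAMPLE-TASK-ID"
+# The task status in the response moves from PENDING to IN_PROGRESS to SUCCESS (or FAILED/CANCELED).
+```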
+
+At the Project level, when there is a pending Analysis Report waiting to be consumed, you have a "Pending" notification in the header, next to the date of the most recent completed analysis.
+
+Global Administrators can view the current queue at **[Administration > Projects > Background Tasks](/#sonarqube-admin#/admin/background_tasks)**. Project administrators can see the tasks for a project at **Administration > Background Tasks**.
+
+## How do I know when analysis report processing fails?
+Background tasks usually succeed, but sometimes unusual circumstances cause processing to fail. Examples include:
+
+* running out of memory while processing a report from a very large project
+* hitting a clash between the key of an existing module or project and one in the report
+* ...
+
+When that happens, the failed status is reflected on the project homepage, but that requires someone to notice it. You can also choose to be notified by email when background tasks fail - either on a project-by-project basis, or globally on all projects where you have administration rights, in the **Notifications** section of your profile.
+
+## How do I diagnose a failing background task?
+For each Analysis Report there is a dropdown menu allowing you to access the "Scanner Context", which shows the configuration of the Scanner at the time the code scan was run.
+
+If processing failed for the task, an additional option is available: "Show Error Details", which gives the technical details of why the processing of the Background Task failed.
+
+## How do I cancel a pending analysis report?
+Administrators can cancel the processing of a pending task by clicking:
+
+* on the red 'x' available on each line of a `Pending` task
+* on the red "bulk cancel" option next to the pending jobs count. This button cancels all pending tasks.
+
+Once processing has begun on a report, it's too late to cancel it.
+
diff --git a/server/sonar-docs/src/pages/analysis/index.md b/server/sonar-docs/src/pages/analysis/index.md
index 066993ae887..795d07c2ecd 100644
--- a/server/sonar-docs/src/pages/analysis/index.md
+++ b/server/sonar-docs/src/pages/analysis/index.md
@@ -29,7 +29,7 @@ SonarQube can perform analysis on 20+ different languages. The outcome of this a
 * A dynamic analysis of code can be performed on certain languages.
 
 ## Will _all_ files be analyzed?
-By default, only files that are recognized by a language analyzer are loaded into the project during analysis. For example if your SonarQube instance had only SonarJava SonarJS on board, all .java and .js files would be loaded, but .xml files would be ignored. However, it is possible to import all text files in a project by setting [**Settings > Exclusions > Files > Import unknown files**](/#sonarqube-admin#/admin/settings?category=exclusions) to true.
+By default, only files that are recognized by a language analyzer are loaded into the project during analysis. For example, if your SonarQube instance had only SonarJava and SonarJS on board, all .java and .js files would be loaded, but .xml files would be ignored.
 
 ## What happens during analysis?
 During analysis, data is requested from the server, the files provided to the analysis are analyzed, and the resulting data is sent back to the server at the end in the form of a report, which is then analyzed asynchronously server-side.
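+
+For illustration, a minimal, hypothetical scanner invocation that triggers this flow could look like the following; the project key and server URL are placeholders:
+
+```
+# Sketch: launch an analysis with the SonarQube Scanner CLI.
+# sonar.sources lists the files handed to the language analyzers; the resulting
+# Analysis Report is sent to the server given by sonar.host.url for background processing.
+sonar-scanner \
+  -Dsonar.projectKey=my-project \
+  -Dsonar.sources=. \
+  -Dsonar.host.url=http://localhost:9000
+```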
diff --git a/server/sonar-docs/src/pages/analysis/pull-request.md b/server/sonar-docs/src/pages/analysis/pull-request.md new file mode 100644 index 00000000000..603a3e2e980 --- /dev/null +++ b/server/sonar-docs/src/pages/analysis/pull-request.md @@ -0,0 +1,63 @@ +--- +title: Pull Request Analysis +--- + + + +_Pull Request analysis is available as part of [Developer Edition](https://redirect.sonarsource.com/editions/developer.html)_ + + + + +Pull Request analysis allows you to: + +* see your Pull Request (PR) analysis results in the SonarQube UI and see the green or red status to highlight the existence of open issues. +* automatically decorate your PRs with SonarQube issues in your SCM provider's interface. + +PRs are visible in SonarQube from the "branches and pull requests" dropdown menu of your project. + +When PR decoration is enabled, SonarQube publishes the status of the analysis (Quality Gate) on the PR. + +When "Confirm", "Resolved as False Positive" or "Won't Fix" actions are performed on issues in SonarQube UI, the status of the PR is updated accordingly. This means, if you want to get a green status on the PR, you can either fix the issues for real or "Confirm", "Resolved as False Positive" or "Won't Fix" any remaining issues available on the PR. + +PR analyses on SonarQube are deleted automatically after 30 days with no analysis. This can be updated in **Configuration > General > Number of days before purging inactive short living branches**. + + +## Integrations for GitHub, Bitbucket Cloud and VSTS +If your repositories are hosted on GitHub, Bitbucket Cloud or VSTS, check out first the dedicated ["Integrations" pages](/integrations/index). Chances are that you do not need to read this page further since those integrations handle the configuration and analysis parameters for you. + + +## Analysis Parameters +### Pull Request Analysis in SonarQube +These parameters enable PR analysis: + +| Parameter Name | Description | +| --------------------- | ------------------ | +| `sonar.pullrequest.branch` | The name of your PR
Ex: `sonar.pullrequest.branch=feature/my-new-feature`| +| `sonar.pullrequest.key` | Unique identifier of your PR. Must correspond to the key of the PR in GitHub or TFS.
Ex: `sonar.pullrequest.key=5` |
+| `sonar.pullrequest.base` | The long-lived branch into which the PR will be merged.
Default: master
Ex: `sonar.pullrequest.base=master`|
+
+### Pull Request Decoration
+To activate PR decoration, you need to:
+
+* declare an Authentication Token
+* specify the Git provider
+* provide some specific parameters (GitHub only)
+
+#### Authentication Token
+The first thing to configure is the authentication token that will be used by SonarQube to decorate the PRs. This can be configured in **Administration > Pull Requests**. The field to configure depends on the provider.
+
+For GitHub Enterprise or GitHub.com, you need to configure the **Authentication token** field. For VSTS/TFS, it's the **Personal access token**.
+
+#### Pull Request Provider
+| Parameter Name | Description |
+| --------------------- | ------------------ |
+| `sonar.pullrequest.provider` | `github` or `vsts`
This is the name of the system managing your PR. In VSTS/TFS, when analysis is run with the SonarQube Extension for VSTS-TFS, `sonar.pullrequest.provider` is automatically populated with "vsts". |
+
+#### GitHub Parameters
+| Parameter Name | Description |
+| --------------------- | ------------------ |
+| `sonar.pullrequest.github.repository` | The slug of the GitHub repository |
+| `sonar.pullrequest.github.endpoint` | The API URL for your GitHub instance.
Ex.: `https://api.github.com/` or `https://github.company.com/api/v3/` | + +Note: if you were relying on the GitHub Plugin, its properties are no longer required and they must be removed from your configuration: `sonar.analysis.mode`, `sonar.github.repository`, `sonar.github.pullRequest`, `sonar.github.oauth`. diff --git a/server/sonar-docs/src/pages/analysis/scm_integration.md b/server/sonar-docs/src/pages/analysis/scm_integration.md index 6f6c685cf84..df984418238 100644 --- a/server/sonar-docs/src/pages/analysis/scm_integration.md +++ b/server/sonar-docs/src/pages/analysis/scm_integration.md @@ -9,10 +9,7 @@ Collecting SCM data during code analysis can unlock a number of SonarQube featur * SCM-driven detection of new code (to help with Fixing the Water Leak). Without SCM data, SonarQube determines new code using analysis dates (to timestamp modification of lines). ### Turning it on/off -SCM integration requires support for your individual SCM provider. Git and SVN are supported by default. - -For other SCM providers, see the Marketplace. - +SCM integration requires support for your individual SCM provider. Git and SVN are supported by default. For other SCM providers, see the Marketplace. -Then, if need be, you can toggle it off at global/project level via administration settings. +If need be, you can toggle it off at global/project level via administration settings. diff --git a/server/sonar-docs/src/pages/branches/branches-faq.md b/server/sonar-docs/src/pages/branches/branches-faq.md index 3460b07c8b9..e5925192786 100644 --- a/server/sonar-docs/src/pages/branches/branches-faq.md +++ b/server/sonar-docs/src/pages/branches/branches-faq.md @@ -38,4 +38,4 @@ Please note you cannot use `sonar.branch` together with `sonar.branch.name`. **A:** When the computation of the background task is done for a given branch but also when an issue is updated on a short-lived branch. **Q:** What is the impact on my LOCs consumption vs my license? -**A:** LOCs scanned on long-lived or short-lived branches are NOT counted so you can scan as much as you want without impact on your LOCs consumed +**A:** The LOC of your largest branch are counted toward your license limit. All other branches are ignored. diff --git a/server/sonar-docs/src/pages/branches/index.md b/server/sonar-docs/src/pages/branches/index.md index 0bf242db08d..e5fa3afb4db 100644 --- a/server/sonar-docs/src/pages/branches/index.md +++ b/server/sonar-docs/src/pages/branches/index.md @@ -2,14 +2,13 @@ title: Branches --- -## Table of Contents - - _Branch analysis is available as part of [Developer Edition](https://redirect.sonarsource.com/editions/developer.html)_ - +## Table of Contents + + Branch analysis allows you to * analyze long-lived branches diff --git a/server/sonar-docs/src/pages/custom-measures.md b/server/sonar-docs/src/pages/custom-measures.md new file mode 100644 index 00000000000..2c3979b283f --- /dev/null +++ b/server/sonar-docs/src/pages/custom-measures.md @@ -0,0 +1,15 @@ +--- +title: Custom Measures +scope: sonarqube +--- + +SonarQube collects a maximum of measures in an automated manner but there are some measures for which this is not possible, such as when: the information is not available for collection, the measure is computed by a human, and so on. Whatever the reason, SonarQube provides a service to inject those measures manually and allow you to benefit from other services: the Manual Measures service. 
The manual measures entered will be picked up during the next analysis of the project and thereafter treated as "normal" measures.
+
+## Managing Custom Metrics
+As with measures that are collected automatically, manual measures are the values collected in each analysis for manual metrics. Therefore, the first thing to do is create the metric you want to save your measure against. In order to do so, log in as a system administrator and go to **[Administration > Configuration > Custom Metrics](/#sonarqube-admin#/admin/custom_metrics)**, where the interface will guide you in creating the Metric you need.
+
+## Managing Custom Measures
+Custom measures can be entered at project level. To add a measure, sign in as a project administrator, navigate to the desired project and choose **Administration > Custom Measures**, where you will find a table with the latest measure value entered for each metric.
+
+Values entered in this interface are "Pending", and will not be visible outside this administrative interface until the next analysis.
+
diff --git a/server/sonar-docs/src/pages/housekeeping.md b/server/sonar-docs/src/pages/housekeeping.md
new file mode 100644
index 00000000000..e2b333382d5
--- /dev/null
+++ b/server/sonar-docs/src/pages/housekeeping.md
@@ -0,0 +1,18 @@
+---
+title: Housekeeping
+---
+
+When you run a new analysis of your project, some data that was previously available is cleaned out of the database. For example, the source code of the previous analysis, measures at directory and file levels, and so on are automatically removed at the end of a new analysis. Additionally, some old analysis snapshots are also removed.
+
+Why? Well, it's useful to analyze a project frequently to see how its quality evolves. It is also useful to be able to see the trends over weeks, months, years. But when you look back in time, you don't really need the same level of detail as you do for the project's current state. To save space and to improve overall performance, the Database Cleaner deletes some rows in the database. Here is its default configuration:
+
+* For each project:
+  * only one snapshot per day is kept after 1 day. Snapshots marked by an event are not deleted.
+  * only one snapshot per week is kept after 1 month. Snapshots marked by an event are not deleted.
+  * only one snapshot per month is kept after 1 year. Snapshots marked by an event are not deleted.
+  * only snapshots with version events are kept after 2 years. Snapshots without events or with only other event types are deleted.
+  * **all snapshots** older than 5 years are deleted, including snapshots marked by an event.
+* All closed issues more than 30 days old are deleted
+* History at package/directory level is removed
+
+These settings can be changed at [Administration > General > Database Cleaner](/#sonarqube-admin#/admin/settings).
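+
+If you prefer to script this configuration, the same settings can also be changed through the standard settings Web API. A minimal sketch - the property key and value below are illustrative assumptions, so verify the exact key names shown in the Database Cleaner settings UI for your version:
+
+```
+# Hedged sketch: change a Database Cleaner setting via the Web API.
+# The key name below is an assumption; check the exact keys in Administration > General > Database Cleaner.
+curl -u admin:admin -X POST "http://localhost:9000/api/settings/set" \
+  -d "key=sonar.dbcleaner.daysBeforeDeletingClosedIssues" \
+  -d "value=60"
+```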
diff --git a/server/sonar-docs/src/pages/keyboard-shortcuts.md b/server/sonar-docs/src/pages/keyboard-shortcuts.md index 3ec59aeeeab..178ccff0999 100644 --- a/server/sonar-docs/src/pages/keyboard-shortcuts.md +++ b/server/sonar-docs/src/pages/keyboard-shortcuts.md @@ -1,5 +1,6 @@ --- title: Keyboard Shortcuts +order: 99 --- ## Global diff --git a/server/sonar-docs/src/pages/look-and-feel.md b/server/sonar-docs/src/pages/look-and-feel.md new file mode 100644 index 00000000000..d74542fd730 --- /dev/null +++ b/server/sonar-docs/src/pages/look-and-feel.md @@ -0,0 +1,13 @@ +--- +title: Look and Feel +scope: sonarqube +--- + +## Home logo +You can set your own "home" logo in **[Administration > General > Look & Feel](/#sonarqube-admin#/admin/settings)**. Simply provide an image URL and width. Ideally, the width will scale the height to 30 pixels. This logo will be used in both the menu bar and on the About page. + +## Content of the "About" page +You also have the ability to add content to the About page, which anonymous users land on by default: **[Administration > General > Look & Feel](/#sonarqube-admin#/admin/settings)**. + +## Gravatar +Gravatar support is enabled by default, using gravatar.com. You can configure a different server or disable the feature altogether. When enabled, gravatars show up next to most uses of the user name. diff --git a/server/sonar-docs/src/pages/metric-definitions.md b/server/sonar-docs/src/pages/metric-definitions.md new file mode 100644 index 00000000000..737b3e656db --- /dev/null +++ b/server/sonar-docs/src/pages/metric-definitions.md @@ -0,0 +1,336 @@ +--- +title: Metric Definitions +--- + +## Table of Contents + + +## Complexity +**Complexity** (`complexity`) +It is the Cyclomatic Complexity calculated based on the number of paths through the code. Whenever the control flow of a function splits, the complexity counter gets incremented by one. Each function has a minimum complexity of 1. This calculation varies slightly by language because keywords and functionalities do. + +[[collapse]] +| ## Language-specific details +| Language | Notes +| ---|--- +| ABAP | The following keywords increase the complexity by one: `AND`, `CATCH`, `CONTINUE`, `DO`, `ELSEIF`, `IF`, `LOOP`, `LOOPAT`, `OR`, `PROVIDE`, `SELECT…ENDSELECT`, `TRY`, `WHEN`, `WHILE` +| C/C++/Objective-C | The complexity gets incremented by one for: function definitions, `while`, `do while`, `for`, `throw` statements, `switch`, `case`, `default`, `&&` operator, `||` operator, `?` ternary operator, `catch`, `break`, `continue`, `goto`. 
+| COBOL | The following commands increase the complexity by one (except when they are used in a copybook): `ALSO`, `ALTER`, `AND`, `DEPENDING`, `END_OF_PAGE`, `ENTRY`, `EOP`, `EXCEPTION`, `EXIT`, `GOBACK`, `CONTINUE`, `IF`, `INVALID`, `OR`, `OVERFLOW`, `SIZE`, `STOP`, `TIMES`, `UNTIL`, `USE`, `VARYING`, `WHEN`, `EXEC CICS HANDLE`, `EXEC CICS LINK`, `EXEC CICS XCTL`, `EXEC CICS RETURN` +| Java | Keywords incrementing the complexity: `if`, `for`, `while`, `case`, `catch`, `throw`, `&&`, `||`, `?` +| JavaScript, PHP | Complexity is incremented by one for each: function (i.e non-abstract and non-anonymous constructors, functions, procedures or methods), `if`, short-circuit (AKA lazy) logical conjunction (`&&`), short-circuit (AKA lazy) logical disjunction (`||`), ternary conditional expressions, loop, `case` clause of a `switch` statement, `throw` and `catch` statement, `go to` statement (only for PHP) +| PL/I | The following keywords increase the complexity by one: `PROC`, `PROCEDURE`, `GOTO`, `GO TO`, `DO`, `IF`, `WHEN`, `|`, `!`, `|=`, `!=`, `&`, `&=` +| PL/SQL | The complexity gets incremented by one for: the main PL/SQL anonymous block (not inner ones), create procedure, create trigger, procedure_definition, basic loop statement, when_clause_statement (the “when” of simple_case_statement and searched_case_statement), continue_statement, cursor_for_loop_statement, continue_exit_when_clause (The “WHEN” part of the continue and exit statements), exception_handler (every individual “WHEN”), exit_statement, for_loop_statement, forall_statement, if_statement, elsif_clause, raise_statement, return_statement, while_loop_statement, and_expression (“and” reserved word used within PL/SQL expressions), or_expression (“or” reserved word used within PL/SQL expressions), when_clause_expression (the “when” of simple_case_expression and searched_case_expression) +| VB.NET | The complexity gets incremented by one for: method or constructor declaration (Sub, Function), `AndAlso`, `Case`, `Continue`, `End`, `Error`, `Exit`, `If`, `Loop`, `On Error`, `GoTo`, `OrElse`, `Resume`, `Stop`, `Throw`, `Try`. + +**Cognitive Complexity** (`cognitive_complexity`) +How hard it is to understand the code's control flow. See [the Cognitive Complexity White Paper](https://www.sonarsource.com/resources/white-papers/cognitive-complexity.html) for a complete description of the mathematical model applied to compute this measure. + +--- +## Duplications +**Duplicated blocks** (`duplicated_blocks`) +Number of duplicated blocks of lines. + +[[collapse]] +| ## Language-specific details +| For a block of code to be considered as duplicated: +| +| Non-Java projects: +| * There should be at least 100 successive and duplicated tokens. +| * Those tokens should be spread at least on: +| * 30 lines of code for COBOL +| * 20 lines of code for ABAP +| * 10 lines of code for other languages +| +|Java projects: +| There should be at least 10 successive and duplicated statements whatever the number of tokens and lines. Differences in indentation and in string literals are ignored while detecting duplications. + +**Duplicated files** (`duplicated_files`) +Number of files involved in duplications. + +**Duplicated lines** (`duplicated_lines`) +Number of lines involved in duplications. + +**Duplicated lines (%)** (`duplicated_lines_density`) += `duplicated_lines` / `lines` * 100 + +--- +## Issues +**New issues** (`new_violations`) +Number of issues raised for the first time in the New Code period. 
+
+**New xxx issues** (`new_xxx_violations`)
+Number of issues of the specified severity raised for the first time in the New Code period, where xxx is one of: `blocker`, `critical`, `major`, `minor`, `info`.
+
+**Issues** (`violations`)
+Total count of issues in all states.
+
+**xxx issues** (`xxx_issues`)
+Total count of issues of the specified severity, where xxx is one of: `blocker`, `critical`, `major`, `minor`, `info`.
+
+**False positive issues** (`false_positive_issues`)
+Total count of issues marked False Positive.
+
+**Open issues** (`open_issues`)
+Total count of issues in the Open state.
+
+**Confirmed issues** (`confirmed_issues`)
+Total count of issues in the Confirmed state.
+
+**Reopened issues** (`reopened_issues`)
+Total count of issues in the Reopened state.
+
+---
+## Maintainability
+**Code Smells** (`code_smells`)
+Total count of Code Smell issues.
+
+**New Code Smells** (`new_code_smells`)
+Total count of Code Smell issues raised for the first time in the New Code period.
+
+**Maintainability Rating** (`sqale_rating`)
+(Formerly the SQALE rating.)
+Rating given to your project related to the value of your Technical Debt Ratio. The default Maintainability Rating grid is:
+
+A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1
+
+The Maintainability Rating scale can be alternately stated by saying that if the outstanding remediation cost is:
+
+* <=5% of the time that has already gone into the application, the rating is A
+* between 6 and 10%, the rating is a B
+* between 11 and 20%, the rating is a C
+* between 21 and 50%, the rating is a D
+* anything over 50% is an E
+
+**Technical Debt** (`sqale_index`)
+Effort to fix all Code Smells. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.
+
+**Technical Debt on New Code** (`new_technical_debt`)
+Effort to fix all Code Smells raised for the first time in the New Code period.
+
+**Technical Debt Ratio** (`sqale_debt_ratio`)
+Ratio between the cost to develop the software and the cost to fix it. The Technical Debt Ratio formula is:
+ `Remediation cost / Development cost`
+Which can be restated as:
+ `Remediation cost / (Cost to develop 1 line of code * Number of lines of code)`
+The value of the cost to develop a line of code is 0.06 days.
+
+**Technical Debt Ratio on New Code** (`new_sqale_debt_ratio`)
+Ratio between the cost to develop the code changed in the New Code period and the cost of the issues linked to it.
+
+---
+## Quality Gates
+**Quality Gate Status** (`alert_status`)
+State of the Quality Gate associated with your project. Possible values are: `ERROR`, `WARN`, `OK`
+
+**Quality Gate Details** (`quality_gate_details`)
+For all the conditions of your Quality Gate, shows which conditions are failing and which are not.
+
+---
+## Reliability
+**Bugs** (`bugs`)
+Number of bug issues.
+
+**New Bugs** (`new_bugs`)
+Number of new bug issues.
+
+**Reliability Rating** (`reliability_rating`)
+A = 0 Bugs
+B = at least 1 Minor Bug
+C = at least 1 Major Bug
+D = at least 1 Critical Bug
+E = at least 1 Blocker Bug
+
+**Reliability remediation effort** (`reliability_remediation_effort`)
+Effort to fix all bug issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.
+
+**Reliability remediation effort on new code** (`new_reliability_remediation_effort`)
+Same as _Reliability remediation effort_ but on the code changed in the New Code period.
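+
+To make the remediation-effort and rating definitions above concrete, here is a small worked example; the numbers are invented purely for illustration:
+
+```
+Lines of code                = 10,000
+Development cost             = 10,000 * 0.06 days = 600 days
+Technical Debt (sqale_index) = 48 days (stored as minutes in the database)
+Technical Debt Ratio         = 48 / 600 = 0.08 (8%)
+Maintainability Rating       = B (8% falls in the 0.06-0.1 band of the default grid)
+```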
+
+---
+## Security
+**Vulnerabilities** (`vulnerabilities`)
+Number of vulnerability issues.
+
+**New Vulnerabilities** (`new_vulnerabilities`)
+Number of new vulnerability issues.
+
+**Security Rating** (`security_rating`)
+A = 0 Vulnerabilities
+B = at least 1 Minor Vulnerability
+C = at least 1 Major Vulnerability
+D = at least 1 Critical Vulnerability
+E = at least 1 Blocker Vulnerability
+
+**Security remediation effort** (`security_remediation_effort`)
+Effort to fix all vulnerability issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.
+
+**Security remediation effort on new code** (`new_security_remediation_effort`)
+Same as _Security remediation effort_ but on the code changed in the New Code period.
+
+---
+## Size
+**Classes** (`classes`)
+Number of classes (including nested classes, interfaces, enums and annotations).
+
+**Comment lines** (`comment_lines`)
+Number of lines containing either comment or commented-out code.
+
+Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
+
+The following piece of code contains 9 comment lines:
+```
+/** +0 => empty comment line
+ * +0 => empty comment line
+ * This is my documentation +1 => significant comment
+ * although I don't +1 => significant comment
+ * have much +1 => significant comment
+ * to say +1 => significant comment
+ * +0 => empty comment line
+ *************************** +0 => non-significant comment
+ * +0 => empty comment line
+ * blabla... +1 => significant comment
+ */ +0 => empty comment line
+
+/** +0 => empty comment line
+ * public String foo() { +1 => commented-out code
+ * System.out.println(message); +1 => commented-out code
+ * return message; +1 => commented-out code
+ * } +1 => commented-out code
+ */ +0 => empty comment line
+ ```
+[[collapse]]
+| ## Language-specific details
+| Language | Note
+| ---|---
+| COBOL | Lines containing the following instructions are counted both as comments and lines of code: `AUTHOR`, `INSTALLATION`, `DATE-COMPILED`, `DATE-WRITTEN`, `SECURITY`.
+| Java | File headers are not counted as comment lines (because they usually define the license).
+
+**Comments (%)** (`comment_lines_density`)
+Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100
+
+With such a formula:
+* 50% means that the number of lines of code equals the number of comment lines
+* 100% means that the file only contains comment lines
+
+**Directories** (`directories`)
+Number of directories.
+
+**Files** (`files`)
+Number of files.
+
+**Lines** (`lines`)
+Number of physical lines (number of carriage returns).
+
+**Lines of code** (`ncloc`)
+Number of physical lines that contain at least one character which is neither a whitespace nor a tabulation nor part of a comment.
+[[collapse]]
+| ## Language-specific details
+| Language | Note
+| --- | ---
+| COBOL | Generated lines of code and pre-processing instructions (`SKIP1`, `SKIP2`, `SKIP3`, `COPY`, `EJECT`, `REPLACE`) are not counted as lines of code.
+
+**Lines of code per language** (`ncloc_language_distribution`)
+Non-commenting lines of code, distributed by language.
+
+**Functions** (`functions`)
+Number of functions. Depending on the language, a function is either a function or a method or a paragraph.
+[[collapse]]
+| ## Language-specific details
+| Language | Note
+| ---|---
+| COBOL | It is the number of paragraphs.
+| Java | Methods in anonymous classes are ignored.
+| VB.NET | Accessors are not considered to be methods.
+
+**Projects** (`projects`)
+Number of projects in a Portfolio.
+
+**Statements** (`statements`)
+Number of statements.
+
+---
+## Tests
+**Condition coverage** (`branch_coverage`)
+On each line of code containing some boolean expressions, the condition coverage simply answers the following question: 'Has each boolean expression been evaluated both to true and false?'. This is the density of possible conditions in flow control structures that have been followed during unit test execution.
+
+`Condition coverage = (CT + CF) / (2*B)`
+where
+* CT = conditions that have been evaluated to 'true' at least once
+* CF = conditions that have been evaluated to 'false' at least once
+* B = total number of conditions
+
+**Condition coverage on new code** (`new_branch_coverage`)
+Identical to Condition coverage but restricted to new / updated source code.
+
+**Condition coverage hits** (`branch_coverage_hits_data`)
+List of covered conditions.
+
+**Conditions by line** (`conditions_by_line`)
+Number of conditions by line.
+
+**Covered conditions by line** (`covered_conditions_by_line`)
+Number of covered conditions by line.
+
+**Coverage** (`coverage`)
+It is a mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the following question: How much of the source code has been covered by the unit tests?
+
+`Coverage = (CT + CF + LC)/(2*B + EL)`
+where
+* CT = conditions that have been evaluated to 'true' at least once
+* CF = conditions that have been evaluated to 'false' at least once
+* LC = covered lines = lines_to_cover - uncovered_lines
+* B = total number of conditions
+* EL = total number of executable lines (`lines_to_cover`)
+
+**Coverage on new code** (`new_coverage`)
+Identical to Coverage but restricted to new / updated source code.
+
+**Line coverage** (`line_coverage`)
+On a given line of code, Line coverage simply answers the following question: Has this line of code been executed during the execution of the unit tests? It is the density of covered lines by unit tests:
+
+`Line coverage = LC / EL`
+where
+* LC = covered lines (`lines_to_cover` - `uncovered_lines`)
+* EL = total number of executable lines (`lines_to_cover`)
+
+**Line coverage on new code** (`new_line_coverage`)
+Identical to Line coverage but restricted to new / updated source code.
+
+**Line coverage hits** (`coverage_line_hits_data`)
+List of covered lines.
+
+**Lines to cover** (`lines_to_cover`)
+Number of lines of code which could be covered by unit tests (for example, blank lines or full comment lines are not considered as lines to cover).
+
+**Lines to cover on new code** (`new_lines_to_cover`)
+Identical to Lines to cover but restricted to new / updated source code.
+
+**Skipped unit tests** (`skipped_tests`)
+Number of skipped unit tests.
+
+**Uncovered conditions** (`uncovered_conditions`)
+Number of conditions which are not covered by unit tests.
+
+**Uncovered conditions on new code** (`new_uncovered_conditions`)
+Identical to Uncovered conditions but restricted to new / updated source code.
+
+**Uncovered lines** (`uncovered_lines`)
+Number of lines of code which are not covered by unit tests.
+
+**Uncovered lines on new code** (`new_uncovered_lines`)
+Identical to Uncovered lines but restricted to new / updated source code.
+
+**Unit tests** (`tests`)
+Number of unit tests.
+
+**Unit tests duration** (`test_execution_time`)
+Time required to execute all the unit tests.
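+
+A short worked example of the coverage formulas above, with numbers invented purely for illustration:
+
+```
+EL (executable lines) = 100     B (conditions)    = 10
+uncovered_lines       = 20      CT (seen 'true')  = 7
+LC = 100 - 20         = 80      CF (seen 'false') = 5
+
+Line coverage      = LC / EL                     = 80 / 100 = 80%
+Condition coverage = (CT + CF) / (2*B)           = 12 / 20  = 60%
+Coverage           = (CT + CF + LC) / (2*B + EL) = 92 / 120 = ~76.7%
+```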
+ +**Unit test errors** (`test_errors`) +Number of unit tests that have failed. + +**Unit test failures** (`test_failures`) +Number of unit tests that have failed with an unexpected exception. + +**Unit test success density (%)** (`test_success_density`) +`Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100` diff --git a/server/sonar-docs/src/pages/privacy.md b/server/sonar-docs/src/pages/privacy.md index cc7c95dbc46..3e122cf5c6a 100644 --- a/server/sonar-docs/src/pages/privacy.md +++ b/server/sonar-docs/src/pages/privacy.md @@ -1,5 +1,6 @@ --- title: Privacy +scope: sonarcloud --- The privacy policy specifies how data collected on this website is used. Thank you for visiting our website and your interest in our services and products. As the protection of your personal data is an important concern for us, we will explain below what information we collect during your visit to our website, as they are processed and whether or how these may be used. diff --git a/server/sonar-docs/src/pages/pull_request.md b/server/sonar-docs/src/pages/pull_request.md deleted file mode 100644 index 603a3e2e980..00000000000 --- a/server/sonar-docs/src/pages/pull_request.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: Pull Request Analysis ---- - - - -_Pull Request analysis is available as part of [Developer Edition](https://redirect.sonarsource.com/editions/developer.html)_ - - - - -Pull Request analysis allows you to: - -* see your Pull Request (PR) analysis results in the SonarQube UI and see the green or red status to highlight the existence of open issues. -* automatically decorate your PRs with SonarQube issues in your SCM provider's interface. - -PRs are visible in SonarQube from the "branches and pull requests" dropdown menu of your project. - -When PR decoration is enabled, SonarQube publishes the status of the analysis (Quality Gate) on the PR. - -When "Confirm", "Resolved as False Positive" or "Won't Fix" actions are performed on issues in SonarQube UI, the status of the PR is updated accordingly. This means, if you want to get a green status on the PR, you can either fix the issues for real or "Confirm", "Resolved as False Positive" or "Won't Fix" any remaining issues available on the PR. - -PR analyses on SonarQube are deleted automatically after 30 days with no analysis. This can be updated in **Configuration > General > Number of days before purging inactive short living branches**. - - -## Integrations for GitHub, Bitbucket Cloud and VSTS -If your repositories are hosted on GitHub, Bitbucket Cloud or VSTS, check out first the dedicated ["Integrations" pages](/integrations/index). Chances are that you do not need to read this page further since those integrations handle the configuration and analysis parameters for you. - - -## Analysis Parameters -### Pull Request Analysis in SonarQube -These parameters enable PR analysis: - -| Parameter Name | Description | -| --------------------- | ------------------ | -| `sonar.pullrequest.branch` | The name of your PR
Ex: `sonar.pullrequest.branch=feature/my-new-feature`| -| `sonar.pullrequest.key` | Unique identifier of your PR. Must correspond to the key of the PR in GitHub or TFS.
E.G.: `sonar.pullrequest.key=5` | -| `sonar.pullrequest.base` | The long-lived branch into which the PR will be merged.
Default: master
E.G.: `sonar.pullrequest.base=master`| - -### Pull Request Decoration -To activate PR decoration, you need to: - -* declare an Authentication Token -* specify the Git provider -* feed some specific parameters (GitHub only) - -#### Authentication Token -The first thing to configure is the authentication token that will be used by SonarQube to decorate the PRs. This can be configured in **Administration > Pull Requests**. The field to configure depends on the provider. - -For GitHub Enterprise or GitHub.com, you need to configure the **Authentication token** field. For VSTS/TFS, it's the **Personal access token**. - -#### Pull Request Provider -| Parameter Name | Description | -| --------------------- | ------------------ | -| `sonar.pullrequest.provider` | `github` or `vsts`
This is the name of the system managing your PR. In VSTS/TFS, when the Analyzing with SonarQube Extension for VSTS-TFS is used, `sonar.pullrequest.provider` is automatically populated with "vsts". | - -#### GitHub Parameters -| Parameter Name | Description | -| --------------------- | ------------------ | -| `sonar.pullrequest.github.repository` | SLUG of the GitHub Repo | -| `sonar.pullrequest.github.endpoint` | The API url for your GitHub instance.
Ex.: `https://api.github.com/` or `https://github.company.com/api/v3/` | - -Note: if you were relying on the GitHub Plugin, its properties are no longer required and they must be removed from your configuration: `sonar.analysis.mode`, `sonar.github.repository`, `sonar.github.pullRequest`, `sonar.github.oauth`.