
gpg_key_commit_verification.go 17KB

Add context cache as a request level cache (#22294)

To avoid loading the same data repeatedly within one HTTP request, we can set up a request-level context cache. For example, some pages load the same user from the database by ID in several different areas of the page, but the loads are hidden deep inside two different code paths. How should we share the user? With this PR, as long as both entry functions accept `context.Context` as their first parameter, we only need to refactor `GetUserByID` to reuse the user from the context cache; it will then not be loaded twice within one HTTP request. Of course, sometimes we want to reload an object from the database, which is why `RemoveContextData` is also exposed.

The core context cache defines a new context:

```go
type cacheContext struct {
	ctx  context.Context
	data map[any]map[any]any
	lock sync.RWMutex
}

var cacheContextKey = struct{}{}

func WithCacheContext(ctx context.Context) context.Context {
	return context.WithValue(ctx, cacheContextKey, &cacheContext{
		ctx:  ctx,
		data: make(map[any]map[any]any),
	})
}
```

You can then use the four methods below to read, write, and delete data within the same context:

```go
func GetContextData(ctx context.Context, tp, key any) any
func SetContextData(ctx context.Context, tp, key, value any)
func RemoveContextData(ctx context.Context, tp, key any)
func GetWithContextCache[T any](ctx context.Context, cacheGroupKey string, cacheTargetID any, f func() (T, error)) (T, error)
```

Let's look at how `system.GetSetting` implements it:

```go
func GetSetting(ctx context.Context, key string) (string, error) {
	return cache.GetWithContextCache(ctx, contextCacheKey, key, func() (string, error) {
		return cache.GetString(genSettingCacheKey(key), func() (string, error) {
			res, err := GetSettingNoCache(ctx, key)
			if err != nil {
				return "", err
			}
			return res.SettingValue, nil
		})
	})
}
```

First, it checks whether the context data includes a setting object for the key. If not, it queries the global cache, which may be a memory or Redis cache; if that also misses, it reads the object from the database. Finally, if the object came from the global cache or the database, it is stored in the context cache. An object stored in the context cache is only destroyed after the context disappears.
// Copyright 2021 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package asymkey

import (
	"context"
	"fmt"
	"hash"
	"strings"

	"code.gitea.io/gitea/models/db"
	repo_model "code.gitea.io/gitea/models/repo"
	user_model "code.gitea.io/gitea/models/user"
	"code.gitea.io/gitea/modules/git"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/setting"

	"github.com/keybase/go-crypto/openpgp/packet"
)
// This file provides functions relating to commit verification.

// CommitVerification represents the validation of a commit's signature.
type CommitVerification struct {
	Verified       bool
	Warning        bool
	Reason         string
	SigningUser    *user_model.User
	CommittingUser *user_model.User
	SigningEmail   string
	SigningKey     *GPGKey
	SigningSSHKey  *PublicKey
	TrustStatus    string
}

// SignCommit represents a commit with validation of signature.
type SignCommit struct {
	Verification *CommitVerification
	*user_model.UserCommit
}

const (
	// BadSignature is used as the reason when the signature has a KeyID that is in the db
	// but no key that has that ID verifies the signature. This is a suspicious failure.
	BadSignature = "gpg.error.probable_bad_signature"
	// BadDefaultSignature is used as the reason when the signature has a KeyID that matches the
	// default Key but is not verified by the default key. This is a suspicious failure.
	BadDefaultSignature = "gpg.error.probable_bad_default_signature"
	// NoKeyFound is used as the reason when no key can be found to verify the signature.
	NoKeyFound = "gpg.error.no_gpg_keys_found"
)
// ParseCommitsWithSignature checks whether the signatures of the commits correspond to the users' GPG keys.
func ParseCommitsWithSignature(ctx context.Context, oldCommits []*user_model.UserCommit, repoTrustModel repo_model.TrustModelType, isOwnerMemberCollaborator func(*user_model.User) (bool, error)) []*SignCommit {
	newCommits := make([]*SignCommit, 0, len(oldCommits))
	keyMap := map[string]bool{}
	for _, c := range oldCommits {
		signCommit := &SignCommit{
			UserCommit:   c,
			Verification: ParseCommitWithSignature(ctx, c.Commit),
		}
		_ = CalculateTrustStatus(signCommit.Verification, repoTrustModel, isOwnerMemberCollaborator, &keyMap)
		newCommits = append(newCommits, signCommit)
	}
	return newCommits
}
// ParseCommitWithSignature checks whether the signature verifies against the keystore.
func ParseCommitWithSignature(ctx context.Context, c *git.Commit) *CommitVerification {
	var committer *user_model.User
	if c.Committer != nil {
		var err error
		// Find the committer's account. This looks the user up by primary or activated
		// email, so the commit will not be valid if the email is not activated.
		committer, err = user_model.GetUserByEmail(ctx, c.Committer.Email)
		if err != nil { // No user found for the committer; fall back to a pseudo user
			committer = &user_model.User{
				Name:  c.Committer.Name,
				Email: c.Committer.Email,
			}
			// We can expect this to often be an ErrUserNotExist. In the case
			// it is not, however, it is important to log it.
			if !user_model.IsErrUserNotExist(err) {
				log.Error("GetUserByEmail: %v", err)
				return &CommitVerification{
					CommittingUser: committer,
					Verified:       false,
					Reason:         "gpg.error.no_committer_account",
				}
			}
		}
	}
	// If there is no signature, just report the committer
	if c.Signature == nil {
		return &CommitVerification{
			CommittingUser: committer,
			Verified:       false,                         // Default value
			Reason:         "gpg.error.not_signed_commit", // Default value
		}
	}
	// If this is an SSH signature, handle it differently
	if strings.HasPrefix(c.Signature.Signature, "-----BEGIN SSH SIGNATURE-----") {
		return ParseCommitWithSSHSignature(ctx, c, committer)
	}
	// Parse the signature
	sig, err := extractSignature(c.Signature.Signature)
	if err != nil { // Failed to extract the signature
		log.Error("SignatureRead err: %v", err)
		return &CommitVerification{
			CommittingUser: committer,
			Verified:       false,
			Reason:         "gpg.error.extract_sign",
		}
	}
	keyID := tryGetKeyIDFromSignature(sig)
	defaultReason := NoKeyFound
	// First check if the sig has a keyID and if so just look at that
	if commitVerification := hashAndVerifyForKeyID(
		ctx,
		sig,
		c.Signature.Payload,
		committer,
		keyID,
		setting.AppName,
		""); commitVerification != nil {
		if commitVerification.Reason == BadSignature {
			defaultReason = BadSignature
		} else {
			return commitVerification
		}
	}
	// Now try to associate the signature with the committer, if present
	if committer.ID != 0 {
		keys, err := db.Find[GPGKey](ctx, FindGPGKeyOptions{
			OwnerID: committer.ID,
		})
		if err != nil { // Failed to get the committer's GPG keys
			log.Error("ListGPGKeys: %v", err)
			return &CommitVerification{
				CommittingUser: committer,
				Verified:       false,
				Reason:         "gpg.error.failed_retrieval_gpg_keys",
			}
		}
		if err := GPGKeyList(keys).LoadSubKeys(ctx); err != nil {
			log.Error("LoadSubKeys: %v", err)
			return &CommitVerification{
				CommittingUser: committer,
				Verified:       false,
				Reason:         "gpg.error.failed_retrieval_gpg_keys",
			}
		}
		committerEmailAddresses, _ := user_model.GetEmailAddresses(ctx, committer.ID)
		activated := false
		for _, e := range committerEmailAddresses {
			if e.IsActivated && strings.EqualFold(e.Email, c.Committer.Email) {
				activated = true
				break
			}
		}
		for _, k := range keys {
			// Pre-check (& optimization): an email attached to the key must match the
			// committer's email for the key to be able to validate the commit
			canValidate := false
			email := ""
			if k.Verified && activated {
				canValidate = true
				email = c.Committer.Email
			}
			if !canValidate {
				for _, e := range k.Emails {
					if e.IsActivated && strings.EqualFold(e.Email, c.Committer.Email) {
						canValidate = true
						email = e.Email
						break
					}
				}
			}
			if !canValidate {
				continue // Skip this key
			}
			commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, c.Signature.Payload, k, committer, committer, email)
			if commitVerification != nil {
				return commitVerification
			}
		}
	}
	if setting.Repository.Signing.SigningKey != "" && setting.Repository.Signing.SigningKey != "default" && setting.Repository.Signing.SigningKey != "none" {
		// OK we should try the default key
		gpgSettings := git.GPGSettings{
			Sign:  true,
			KeyID: setting.Repository.Signing.SigningKey,
			Name:  setting.Repository.Signing.SigningName,
			Email: setting.Repository.Signing.SigningEmail,
		}
		if err := gpgSettings.LoadPublicKeyContent(); err != nil {
			log.Error("Error getting default signing key: %s %v", gpgSettings.KeyID, err)
		} else if commitVerification := verifyWithGPGSettings(ctx, &gpgSettings, sig, c.Signature.Payload, committer, keyID); commitVerification != nil {
			if commitVerification.Reason == BadSignature {
				defaultReason = BadSignature
			} else {
				return commitVerification
			}
		}
	}
	defaultGPGSettings, err := c.GetRepositoryDefaultPublicGPGKey(false)
	if err != nil {
		log.Error("Error getting default public gpg key: %v", err)
	} else if defaultGPGSettings == nil {
		log.Warn("Unable to get defaultGPGSettings for unattached commit: %s", c.ID.String())
	} else if defaultGPGSettings.Sign {
		if commitVerification := verifyWithGPGSettings(ctx, defaultGPGSettings, sig, c.Signature.Payload, committer, keyID); commitVerification != nil {
			if commitVerification.Reason == BadSignature {
				defaultReason = BadSignature
			} else {
				return commitVerification
			}
		}
	}
	return &CommitVerification{ // Default at this stage
		CommittingUser: committer,
		Verified:       false,
		Warning:        defaultReason != NoKeyFound,
		Reason:         defaultReason,
		SigningKey: &GPGKey{
			KeyID: keyID,
		},
	}
}
func verifyWithGPGSettings(ctx context.Context, gpgSettings *git.GPGSettings, sig *packet.Signature, payload string, committer *user_model.User, keyID string) *CommitVerification {
	// First try to find the key in the db
	if commitVerification := hashAndVerifyForKeyID(ctx, sig, payload, committer, gpgSettings.KeyID, gpgSettings.Name, gpgSettings.Email); commitVerification != nil {
		return commitVerification
	}
	// Otherwise we have to parse the key
	ekeys, err := checkArmoredGPGKeyString(gpgSettings.PublicKeyContent)
	if err != nil {
		log.Error("Unable to get default signing key: %v", err)
		return &CommitVerification{
			CommittingUser: committer,
			Verified:       false,
			Reason:         "gpg.error.generate_hash",
		}
	}
	for _, ekey := range ekeys {
		pubkey := ekey.PrimaryKey
		content, err := base64EncPubKey(pubkey)
		if err != nil {
			return &CommitVerification{
				CommittingUser: committer,
				Verified:       false,
				Reason:         "gpg.error.generate_hash",
			}
		}
		k := &GPGKey{
			Content: content,
			CanSign: pubkey.CanSign(),
			KeyID:   pubkey.KeyIdString(),
		}
		for _, subKey := range ekey.Subkeys {
			content, err := base64EncPubKey(subKey.PublicKey)
			if err != nil {
				return &CommitVerification{
					CommittingUser: committer,
					Verified:       false,
					Reason:         "gpg.error.generate_hash",
				}
			}
			k.SubsKey = append(k.SubsKey, &GPGKey{
				Content: content,
				CanSign: subKey.PublicKey.CanSign(),
				KeyID:   subKey.PublicKey.KeyIdString(),
			})
		}
		if commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, payload, k, committer, &user_model.User{
			Name:  gpgSettings.Name,
			Email: gpgSettings.Email,
		}, gpgSettings.Email); commitVerification != nil {
			return commitVerification
		}
		if keyID == k.KeyID {
			// This is a bad situation ... We have a key id that matches our default key but the signature doesn't match.
			return &CommitVerification{
				CommittingUser: committer,
				Verified:       false,
				Warning:        true,
				Reason:         BadSignature,
			}
		}
	}
	return nil
}
func verifySign(s *packet.Signature, h hash.Hash, k *GPGKey) error {
	// Check if the key can sign
	if !k.CanSign {
		return fmt.Errorf("key can not sign")
	}
	// Decode the key
	pkey, err := base64DecPubKey(k.Content)
	if err != nil {
		return err
	}
	return pkey.VerifySignature(h, s)
}
func hashAndVerify(sig *packet.Signature, payload string, k *GPGKey) (*GPGKey, error) {
	// Generate the hash of the commit
	hash, err := populateHash(sig.Hash, []byte(payload))
	if err != nil { // Failed to generate the hash
		log.Error("PopulateHash: %v", err)
		return nil, err
	}
	// We will ignore errors in verification as they don't need to be propagated up
	err = verifySign(sig, hash, k)
	if err != nil {
		return nil, nil
	}
	return k, nil
}
func hashAndVerifyWithSubKeys(sig *packet.Signature, payload string, k *GPGKey) (*GPGKey, error) {
	verified, err := hashAndVerify(sig, payload, k)
	if err != nil || verified != nil {
		return verified, err
	}
	for _, sk := range k.SubsKey {
		verified, err := hashAndVerify(sig, payload, sk)
		if err != nil || verified != nil {
			return verified, err
		}
	}
	return nil, nil
}
func hashAndVerifyWithSubKeysCommitVerification(sig *packet.Signature, payload string, k *GPGKey, committer, signer *user_model.User, email string) *CommitVerification {
	key, err := hashAndVerifyWithSubKeys(sig, payload, k)
	if err != nil { // Failed to generate the hash
		return &CommitVerification{
			CommittingUser: committer,
			Verified:       false,
			Reason:         "gpg.error.generate_hash",
		}
	}
	if key != nil {
		return &CommitVerification{ // Everything is ok
			CommittingUser: committer,
			Verified:       true,
			Reason:         fmt.Sprintf("%s / %s", signer.Name, key.KeyID),
			SigningUser:    signer,
			SigningKey:     key,
			SigningEmail:   email,
		}
	}
	return nil
}
func hashAndVerifyForKeyID(ctx context.Context, sig *packet.Signature, payload string, committer *user_model.User, keyID, name, email string) *CommitVerification {
	if keyID == "" {
		return nil
	}
	keys, err := db.Find[GPGKey](ctx, FindGPGKeyOptions{
		KeyID:          keyID,
		IncludeSubKeys: true,
	})
	if err != nil {
		log.Error("GetGPGKeysByKeyID: %v", err)
		return &CommitVerification{
			CommittingUser: committer,
			Verified:       false,
			Reason:         "gpg.error.failed_retrieval_gpg_keys",
		}
	}
	if len(keys) == 0 {
		return nil
	}
	for _, key := range keys {
		var primaryKeys []*GPGKey
		if key.PrimaryKeyID != "" {
			primaryKeys, err = db.Find[GPGKey](ctx, FindGPGKeyOptions{
				KeyID:          key.PrimaryKeyID,
				IncludeSubKeys: true,
			})
			if err != nil {
				log.Error("GetGPGKeysByKeyID: %v", err)
				return &CommitVerification{
					CommittingUser: committer,
					Verified:       false,
					Reason:         "gpg.error.failed_retrieval_gpg_keys",
				}
			}
		}
		activated, email := checkKeyEmails(ctx, email, append([]*GPGKey{key}, primaryKeys...)...)
		if !activated {
			continue
		}
		signer := &user_model.User{
			Name:  name,
			Email: email,
		}
		if key.OwnerID != 0 {
			owner, err := user_model.GetUserByID(ctx, key.OwnerID)
			if err == nil {
				signer = owner
			} else if !user_model.IsErrUserNotExist(err) {
				log.Error("Failed to user_model.GetUserByID: %d for key ID: %d (%s) %v", key.OwnerID, key.ID, key.KeyID, err)
				return &CommitVerification{
					CommittingUser: committer,
					Verified:       false,
					Reason:         "gpg.error.no_committer_account",
				}
			}
		}
		commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, payload, key, committer, signer, email)
		if commitVerification != nil {
			return commitVerification
		}
	}
	// This is a bad situation ... We have a key id that is in our database but the signature doesn't match.
	return &CommitVerification{
		CommittingUser: committer,
		Verified:       false,
		Warning:        true,
		Reason:         BadSignature,
	}
}
// CalculateTrustStatus will calculate the TrustStatus for a commit verification within a repository.
// There are several trust models in Gitea.
func CalculateTrustStatus(verification *CommitVerification, repoTrustModel repo_model.TrustModelType, isOwnerMemberCollaborator func(*user_model.User) (bool, error), keyMap *map[string]bool) error {
	if !verification.Verified {
		return nil
	}
	// In the Committer trust model a signature is trusted if it matches the committer
	// - it doesn't matter if they're a collaborator, the owner, Gitea or GitHub.
	// NB: This model is commit verification only
	if repoTrustModel == repo_model.CommitterTrustModel {
		// default to "unmatched"
		verification.TrustStatus = "unmatched"
		// We can only verify against users in our database, but the default key will match
		// by email if it is not in the db.
		if (verification.SigningUser.ID != 0 &&
			verification.CommittingUser.ID == verification.SigningUser.ID) ||
			(verification.SigningUser.ID == 0 && verification.CommittingUser.ID == 0 &&
				verification.SigningUser.Email == verification.CommittingUser.Email) {
			verification.TrustStatus = "trusted"
		}
		return nil
	}
	// Now we drop to the more nuanced trust models...
	verification.TrustStatus = "trusted"
	if verification.SigningUser.ID == 0 {
		// This commit is signed by the default key - but this key is not assigned to a user in the DB.
		// However in the repo_model.CollaboratorCommitterTrustModel we cannot mark this as trusted
		// unless the default key matches the email of a non-user.
		if repoTrustModel == repo_model.CollaboratorCommitterTrustModel && (verification.CommittingUser.ID != 0 ||
			verification.SigningUser.Email != verification.CommittingUser.Email) {
			verification.TrustStatus = "untrusted"
		}
		return nil
	}
	// Check that we actually have a GPG SigningKey
	var err error
	if verification.SigningKey != nil {
		var isMember bool
		if keyMap != nil {
			var has bool
			isMember, has = (*keyMap)[verification.SigningKey.KeyID]
			if !has {
				isMember, err = isOwnerMemberCollaborator(verification.SigningUser)
				(*keyMap)[verification.SigningKey.KeyID] = isMember
			}
		} else {
			isMember, err = isOwnerMemberCollaborator(verification.SigningUser)
		}
		if !isMember {
			verification.TrustStatus = "untrusted"
			if verification.CommittingUser.ID != verification.SigningUser.ID {
				// The committing user and the signing user are not the same.
				// This should be marked as questionable unless the signing user is a collaborator/team member etc.
				verification.TrustStatus = "unmatched"
			}
		} else if repoTrustModel == repo_model.CollaboratorCommitterTrustModel && verification.CommittingUser.ID != verification.SigningUser.ID {
			// The committing user and the signing user are not the same, and our trust model states that they must match
			verification.TrustStatus = "unmatched"
		}
	}
	return err
}