author    Jason Song <i@wolfogre.com>  2023-01-31 09:45:19 +0800
committer GitHub <noreply@github.com>  2023-01-31 09:45:19 +0800
commit    4011821c946e8db032be86266dd9364ccb204118 (patch)
tree      a8a1cf1b8f088df583f316c8233bc18a89881099 /models
parent    b5b3e0714e624cea3ce4d5368aa1266f7639d0eb (diff)
Implement actions (#21937)
Close #13539.

Co-authored by: @lunny @appleboy @fuxiaohei and others.

Related projects:
- https://gitea.com/gitea/actions-proto-def
- https://gitea.com/gitea/actions-proto-go
- https://gitea.com/gitea/act
- https://gitea.com/gitea/act_runner

### Summary

The target of this PR is to bring a basic implementation of "Actions", an internal CI/CD system of Gitea. That means that even though it has been merged, the feature is still **EXPERIMENTAL**, so please note that:

- It is disabled by default;
- It shouldn't be used in a production environment currently;
- It shouldn't be used in a public Gitea instance currently;
- Breaking changes may be made before it's stable.

**Please comment on #13539 if you have any different product design ideas**; all decisions reached there will be adopted here. But in this PR, we don't talk about **naming, feature-creep or alternatives**.

### ⚠️ Breaking

`gitea-actions` will become a reserved user name. If a user with that name already exists in the database, it is recommended to rename it.

### Some important reviews

- What is `DEFAULT_ACTIONS_URL` in `app.ini` for?
  - https://github.com/go-gitea/gitea/pull/21937#discussion_r1055954954
- Why is the API for runners not under the normal `/api/v1` prefix?
  - https://github.com/go-gitea/gitea/pull/21937#discussion_r1061173592
- Why DBFS?
  - https://github.com/go-gitea/gitea/pull/21937#discussion_r1061301178
- Why ignore events triggered by the `gitea-actions` bot?
  - https://github.com/go-gitea/gitea/pull/21937#discussion_r1063254103
- Why is there no permission control for actions?
  - https://github.com/go-gitea/gitea/pull/21937#discussion_r1090229868

### What it looks like

<details>

#### Manage runners

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205870657-c72f590e-2e08-4cd4-be7f-2e0abb299bbf.png">

#### List runs

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205872794-50fde990-2b45-48c1-a178-908e4ec5b627.png">

#### View logs

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205872501-9b7b9000-9542-4991-8f55-18ccdada77c3.png">

</details>

### How to try it

<details>

#### 1. Start Gitea

Clone this branch and [install from source](https://docs.gitea.io/en-us/install-from-source).

Add additional configuration in `app.ini` to enable Actions:

```ini
[actions]
ENABLED = true
```

Start it. If all is well, you'll see the management page of runners:

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205877365-8e30a780-9b10-4154-b3e8-ee6c3cb35a59.png">

#### 2. Start runner

Clone the [act_runner](https://gitea.com/gitea/act_runner) and follow the [README](https://gitea.com/gitea/act_runner/src/branch/main/README.md) to start it.

If all is well, you'll see that a new runner has been added:

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205878000-216f5937-e696-470d-b66c-8473987d91c3.png">

#### 3. Enable actions for a repo

Create a new repo or open an existing one, check the `Actions` checkbox in settings and submit.
<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205879705-53e09208-73c0-4b3e-a123-2dcf9aba4b9c.png">
<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205879383-23f3d08f-1a85-41dd-a8b3-54e2ee6453e8.png">

If all is well, you'll see a new tab "Actions":

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205881648-a8072d8c-5803-4d76-b8a8-9b2fb49516c1.png">

#### 4. Upload workflow files

Upload some workflow files to `.gitea/workflows/xxx.yaml`; you can follow the [quickstart](https://docs.github.com/en/actions/quickstart) of GitHub Actions. Yes, Gitea Actions is compatible with GitHub Actions in most cases, so you can use the same demo:

```yaml
name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions 🚀
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v3
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."
```

If all is well, you'll see a new run in the `Actions` tab:

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205884473-79a874bc-171b-4aaf-acd5-0241a45c3b53.png">

#### 5. Check the logs of jobs

Click a run and you'll see the logs:

<img width="1792" alt="image" src="https://user-images.githubusercontent.com/9418365/205884800-994b0374-67f7-48ff-be9a-4c53f3141547.png">

#### 6. Go on

You can try more examples in [the documents](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) of GitHub Actions; you might find a lot of bugs along the way. Come on, PRs are welcome.

</details>

See also: [Feature Preview: Gitea Actions](https://blog.gitea.io/2022/12/feature-preview-gitea-actions/)

---------

Co-authored-by: a1012112796 <1012112796@qq.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Diffstat (limited to 'models')
-rw-r--r--  models/actions/run.go | 254
-rw-r--r--  models/actions/run_job.go | 163
-rw-r--r--  models/actions/run_job_list.go | 99
-rw-r--r--  models/actions/run_list.go | 107
-rw-r--r--  models/actions/runner.go | 252
-rw-r--r--  models/actions/runner_list.go | 77
-rw-r--r--  models/actions/runner_token.go | 86
-rw-r--r--  models/actions/status.go | 100
-rw-r--r--  models/actions/task.go | 504
-rw-r--r--  models/actions/task_list.go | 105
-rw-r--r--  models/actions/task_step.go | 41
-rw-r--r--  models/actions/utils.go | 84
-rw-r--r--  models/actions/utils_test.go | 90
-rw-r--r--  models/dbfs/dbfile.go | 357
-rw-r--r--  models/dbfs/dbfs.go | 73
-rw-r--r--  models/dbfs/dbfs_test.go | 179
-rw-r--r--  models/dbfs/main_test.go | 23
-rw-r--r--  models/issues/comment.go | 2
-rw-r--r--  models/issues/comment_list.go | 27
-rw-r--r--  models/issues/issue.go | 2
-rw-r--r--  models/issues/issue_list.go | 37
-rw-r--r--  models/issues/pull.go | 5
-rw-r--r--  models/issues/review.go | 2
-rw-r--r--  models/migrations/migrations.go | 2
-rw-r--r--  models/migrations/v1_19/v240.go | 176
-rw-r--r--  models/repo.go | 22
-rw-r--r--  models/repo/repo.go | 4
-rw-r--r--  models/repo/repo_unit.go | 2
-rw-r--r--  models/unit/unit.go | 14
-rw-r--r--  models/unittest/testdb.go | 2
-rw-r--r--  models/user/user.go | 41
-rw-r--r--  models/user/user_system.go | 64
32 files changed, 2932 insertions, 64 deletions
diff --git a/models/actions/run.go b/models/actions/run.go
new file mode 100644
index 0000000000..2b748bb0d5
--- /dev/null
+++ b/models/actions/run.go
@@ -0,0 +1,254 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "code.gitea.io/gitea/models/db"
+ repo_model "code.gitea.io/gitea/models/repo"
+ user_model "code.gitea.io/gitea/models/user"
+ "code.gitea.io/gitea/modules/json"
+ api "code.gitea.io/gitea/modules/structs"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/util"
+ webhook_module "code.gitea.io/gitea/modules/webhook"
+
+ "github.com/nektos/act/pkg/jobparser"
+ "xorm.io/builder"
+)
+
+// ActionRun represents a run of a workflow file
+type ActionRun struct {
+ ID int64
+ Title string
+ RepoID int64 `xorm:"index unique(repo_index)"`
+ Repo *repo_model.Repository `xorm:"-"`
+ OwnerID int64 `xorm:"index"`
+ WorkflowID string `xorm:"index"` // the name of workflow file
+ Index int64 `xorm:"index unique(repo_index)"` // a unique number for each run of a repository
+ TriggerUserID int64
+ TriggerUser *user_model.User `xorm:"-"`
+ Ref string
+ CommitSHA string
+ IsForkPullRequest bool
+ Event webhook_module.HookEventType
+ EventPayload string `xorm:"LONGTEXT"`
+ Status Status `xorm:"index"`
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+}
+
+func init() {
+ db.RegisterModel(new(ActionRun))
+ db.RegisterModel(new(ActionRunIndex))
+}
+
+func (run *ActionRun) HTMLURL() string {
+ if run.Repo == nil {
+ return ""
+ }
+ return fmt.Sprintf("%s/actions/runs/%d", run.Repo.HTMLURL(), run.Index)
+}
+
+func (run *ActionRun) Link() string {
+ if run.Repo == nil {
+ return ""
+ }
+ return fmt.Sprintf("%s/actions/runs/%d", run.Repo.Link(), run.Index)
+}
+
+// LoadAttributes load Repo TriggerUser if not loaded
+func (run *ActionRun) LoadAttributes(ctx context.Context) error {
+ if run == nil {
+ return nil
+ }
+
+ if run.Repo == nil {
+ repo, err := repo_model.GetRepositoryByID(ctx, run.RepoID)
+ if err != nil {
+ return err
+ }
+ run.Repo = repo
+ }
+ if err := run.Repo.LoadAttributes(ctx); err != nil {
+ return err
+ }
+
+ if run.TriggerUser == nil {
+ u, err := user_model.GetPossibleUserByID(ctx, run.TriggerUserID)
+ if err != nil {
+ return err
+ }
+ run.TriggerUser = u
+ }
+
+ return nil
+}
+
+func (run *ActionRun) Duration() time.Duration {
+ return calculateDuration(run.Started, run.Stopped, run.Status)
+}
+
+func (run *ActionRun) GetPushEventPayload() (*api.PushPayload, error) {
+ if run.Event == webhook_module.HookEventPush {
+ var payload api.PushPayload
+ if err := json.Unmarshal([]byte(run.EventPayload), &payload); err != nil {
+ return nil, err
+ }
+ return &payload, nil
+ }
+ return nil, fmt.Errorf("event %s is not a push event", run.Event)
+}
+
+func updateRepoRunsNumbers(ctx context.Context, repo *repo_model.Repository) error {
+ _, err := db.GetEngine(ctx).ID(repo.ID).
+ SetExpr("num_action_runs",
+ builder.Select("count(*)").From("action_run").
+ Where(builder.Eq{"repo_id": repo.ID}),
+ ).
+ SetExpr("num_closed_action_runs",
+ builder.Select("count(*)").From("action_run").
+ Where(builder.Eq{
+ "repo_id": repo.ID,
+ }.And(
+ builder.In("status",
+ StatusSuccess,
+ StatusFailure,
+ StatusCancelled,
+ StatusSkipped,
+ ),
+ ),
+ ),
+ ).
+ Update(repo)
+ return err
+}
+
+// InsertRun inserts a run
+func InsertRun(ctx context.Context, run *ActionRun, jobs []*jobparser.SingleWorkflow) error {
+ ctx, commiter, err := db.TxContext(ctx)
+ if err != nil {
+ return err
+ }
+ defer commiter.Close()
+
+ index, err := db.GetNextResourceIndex(ctx, "action_run_index", run.RepoID)
+ if err != nil {
+ return err
+ }
+ run.Index = index
+
+ if run.Status.IsUnknown() {
+ run.Status = StatusWaiting
+ }
+
+ if err := db.Insert(ctx, run); err != nil {
+ return err
+ }
+
+ if run.Repo == nil {
+ repo, err := repo_model.GetRepositoryByID(ctx, run.RepoID)
+ if err != nil {
+ return err
+ }
+ run.Repo = repo
+ }
+
+ if err := updateRepoRunsNumbers(ctx, run.Repo); err != nil {
+ return err
+ }
+
+ runJobs := make([]*ActionRunJob, 0, len(jobs))
+ for _, v := range jobs {
+ id, job := v.Job()
+ needs := job.Needs()
+ job.EraseNeeds()
+ payload, _ := v.Marshal()
+ status := StatusWaiting
+ if len(needs) > 0 {
+ status = StatusBlocked
+ }
+ runJobs = append(runJobs, &ActionRunJob{
+ RunID: run.ID,
+ RepoID: run.RepoID,
+ OwnerID: run.OwnerID,
+ CommitSHA: run.CommitSHA,
+ IsForkPullRequest: run.IsForkPullRequest,
+ Name: job.Name,
+ WorkflowPayload: payload,
+ JobID: id,
+ Needs: needs,
+ RunsOn: job.RunsOn(),
+ Status: status,
+ })
+ }
+ if err := db.Insert(ctx, runJobs); err != nil {
+ return err
+ }
+
+ return commiter.Commit()
+}
+
+func GetRunByID(ctx context.Context, id int64) (*ActionRun, error) {
+ var run ActionRun
+ has, err := db.GetEngine(ctx).Where("id=?", id).Get(&run)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("run with id %d: %w", id, util.ErrNotExist)
+ }
+
+ return &run, nil
+}
+
+func GetRunByIndex(ctx context.Context, repoID, index int64) (*ActionRun, error) {
+ run := &ActionRun{
+ RepoID: repoID,
+ Index: index,
+ }
+ has, err := db.GetEngine(ctx).Get(run)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("run with index %d %d: %w", repoID, index, util.ErrNotExist)
+ }
+
+ return run, nil
+}
+
+func UpdateRun(ctx context.Context, run *ActionRun, cols ...string) error {
+ sess := db.GetEngine(ctx).ID(run.ID)
+ if len(cols) > 0 {
+ sess.Cols(cols...)
+ }
+ _, err := sess.Update(run)
+
+ if run.Status != 0 || util.SliceContains(cols, "status") {
+ if run.RepoID == 0 {
+ run, err = GetRunByID(ctx, run.ID)
+ if err != nil {
+ return err
+ }
+ }
+ if run.Repo == nil {
+ repo, err := repo_model.GetRepositoryByID(ctx, run.RepoID)
+ if err != nil {
+ return err
+ }
+ run.Repo = repo
+ }
+ if err := updateRepoRunsNumbers(ctx, run.Repo); err != nil {
+ return err
+ }
+ }
+
+ return err
+}
+
+type ActionRunIndex db.ResourceIndex
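> `InsertRun` above does all run bookkeeping in one transaction: it allocates the per-repository `Index`, inserts the run, seeds one `ActionRunJob` per parsed job (blocked if it declares `needs`, waiting otherwise), and refreshes the repository counters. A hypothetical caller might look like the following sketch; the helper name, workflow path, and field values are illustrative assumptions, and it presumes `jobparser.Parse` from gitea/act splits a workflow into single-job workflows as the code above expects.

```go
// Hypothetical caller sketch; only InsertRun and the ActionRun fields come
// from the code above, everything else is assumed for illustration.
package actionsdemo

import (
	"context"
	"os"

	actions_model "code.gitea.io/gitea/models/actions"
	webhook_module "code.gitea.io/gitea/modules/webhook"

	"github.com/nektos/act/pkg/jobparser"
)

func insertDemoRun(ctx context.Context, repoID, ownerID, doerID int64) error {
	content, err := os.ReadFile(".gitea/workflows/demo.yaml") // hypothetical path
	if err != nil {
		return err
	}
	// Split the workflow into single-job workflows; each becomes an ActionRunJob.
	jobs, err := jobparser.Parse(content)
	if err != nil {
		return err
	}
	run := &actions_model.ActionRun{
		Title:         "demo run",
		RepoID:        repoID,
		OwnerID:       ownerID,
		WorkflowID:    "demo.yaml",
		TriggerUserID: doerID,
		Ref:           "refs/heads/main",
		Event:         webhook_module.HookEventPush,
	}
	// InsertRun allocates the per-repo Index, inserts the run, seeds the job
	// rows as waiting/blocked, and refreshes the repository's run counters.
	return actions_model.InsertRun(ctx, run, jobs)
}
```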
diff --git a/models/actions/run_job.go b/models/actions/run_job.go
new file mode 100644
index 0000000000..0002e50770
--- /dev/null
+++ b/models/actions/run_job.go
@@ -0,0 +1,163 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/util"
+
+ "xorm.io/builder"
+)
+
+// ActionRunJob represents a job of a run
+type ActionRunJob struct {
+ ID int64
+ RunID int64 `xorm:"index"`
+ Run *ActionRun `xorm:"-"`
+ RepoID int64 `xorm:"index"`
+ OwnerID int64 `xorm:"index"`
+ CommitSHA string `xorm:"index"`
+ IsForkPullRequest bool
+ Name string `xorm:"VARCHAR(255)"`
+ Attempt int64
+ WorkflowPayload []byte
+ JobID string `xorm:"VARCHAR(255)"` // job id in the workflow, not the database id of this job
+ Needs []string `xorm:"JSON TEXT"`
+ RunsOn []string `xorm:"JSON TEXT"`
+ TaskID int64 // the latest task of the job
+ Status Status `xorm:"index"`
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated index"`
+}
+
+func init() {
+ db.RegisterModel(new(ActionRunJob))
+}
+
+func (job *ActionRunJob) Duration() time.Duration {
+ return calculateDuration(job.Started, job.Stopped, job.Status)
+}
+
+func (job *ActionRunJob) LoadRun(ctx context.Context) error {
+ if job.Run == nil {
+ run, err := GetRunByID(ctx, job.RunID)
+ if err != nil {
+ return err
+ }
+ job.Run = run
+ }
+ return nil
+}
+
+// LoadAttributes load Run if not loaded
+func (job *ActionRunJob) LoadAttributes(ctx context.Context) error {
+ if job == nil {
+ return nil
+ }
+
+ if err := job.LoadRun(ctx); err != nil {
+ return err
+ }
+
+ return job.Run.LoadAttributes(ctx)
+}
+
+func GetRunJobByID(ctx context.Context, id int64) (*ActionRunJob, error) {
+ var job ActionRunJob
+ has, err := db.GetEngine(ctx).Where("id=?", id).Get(&job)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("run job with id %d: %w", id, util.ErrNotExist)
+ }
+
+ return &job, nil
+}
+
+func GetRunJobsByRunID(ctx context.Context, runID int64) ([]*ActionRunJob, error) {
+ var jobs []*ActionRunJob
+ if err := db.GetEngine(ctx).Where("run_id=?", runID).OrderBy("id").Find(&jobs); err != nil {
+ return nil, err
+ }
+ return jobs, nil
+}
+
+func UpdateRunJob(ctx context.Context, job *ActionRunJob, cond builder.Cond, cols ...string) (int64, error) {
+ e := db.GetEngine(ctx)
+
+ sess := e.ID(job.ID)
+ if len(cols) > 0 {
+ sess.Cols(cols...)
+ }
+
+ if cond != nil {
+ sess.Where(cond)
+ }
+
+ affected, err := sess.Update(job)
+ if err != nil {
+ return 0, err
+ }
+
+ if affected == 0 || (!util.SliceContains(cols, "status") && job.Status == 0) {
+ return affected, nil
+ }
+
+ if job.RunID == 0 {
+ var err error
+ if job, err = GetRunJobByID(ctx, job.ID); err != nil {
+ return affected, err
+ }
+ }
+
+ jobs, err := GetRunJobsByRunID(ctx, job.RunID)
+ if err != nil {
+ return affected, err
+ }
+
+ runStatus := aggregateJobStatus(jobs)
+
+ run := &ActionRun{
+ ID: job.RunID,
+ Status: runStatus,
+ }
+ if runStatus.IsDone() {
+ run.Stopped = timeutil.TimeStampNow()
+ }
+ return affected, UpdateRun(ctx, run)
+}
+
+func aggregateJobStatus(jobs []*ActionRunJob) Status {
+ allDone := true
+ allWaiting := true
+ hasFailure := false
+ for _, job := range jobs {
+ if !job.Status.IsDone() {
+ allDone = false
+ }
+ if job.Status != StatusWaiting {
+ allWaiting = false
+ }
+ if job.Status == StatusFailure || job.Status == StatusCancelled {
+ hasFailure = true
+ }
+ }
+ if allDone {
+ if hasFailure {
+ return StatusFailure
+ }
+ return StatusSuccess
+ }
+ if allWaiting {
+ return StatusWaiting
+ }
+ return StatusRunning
+}
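> `aggregateJobStatus` above collapses job statuses into a run status: once every job is done, any failure or cancellation makes the run a failure, otherwise it is a success; a run with only waiting jobs stays waiting; anything else counts as running. A small test sketch (hypothetical, assuming it lives in the same `actions` package so it can reach the unexported function) makes the rule concrete.

```go
// Test sketch; assumes it sits in the same package so it can call the
// unexported aggregateJobStatus.
package actions

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestAggregateJobStatusSketch(t *testing.T) {
	// All jobs done and one of them cancelled: the run counts as a failure.
	assert.Equal(t, StatusFailure, aggregateJobStatus([]*ActionRunJob{
		{Status: StatusSuccess}, {Status: StatusCancelled},
	}))
	// Nothing has started yet: the run stays waiting.
	assert.Equal(t, StatusWaiting, aggregateJobStatus([]*ActionRunJob{
		{Status: StatusWaiting}, {Status: StatusWaiting},
	}))
	// At least one job still in progress: the run is running.
	assert.Equal(t, StatusRunning, aggregateJobStatus([]*ActionRunJob{
		{Status: StatusSuccess}, {Status: StatusRunning},
	}))
}
```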
diff --git a/models/actions/run_job_list.go b/models/actions/run_job_list.go
new file mode 100644
index 0000000000..047bf64410
--- /dev/null
+++ b/models/actions/run_job_list.go
@@ -0,0 +1,99 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/container"
+ "code.gitea.io/gitea/modules/timeutil"
+
+ "xorm.io/builder"
+)
+
+type ActionJobList []*ActionRunJob
+
+func (jobs ActionJobList) GetRunIDs() []int64 {
+ ids := make(container.Set[int64], len(jobs))
+ for _, j := range jobs {
+ if j.RunID == 0 {
+ continue
+ }
+ ids.Add(j.RunID)
+ }
+ return ids.Values()
+}
+
+func (jobs ActionJobList) LoadRuns(ctx context.Context, withRepo bool) error {
+ runIDs := jobs.GetRunIDs()
+ runs := make(map[int64]*ActionRun, len(runIDs))
+ if err := db.GetEngine(ctx).In("id", runIDs).Find(&runs); err != nil {
+ return err
+ }
+ for _, j := range jobs {
+ if j.RunID > 0 && j.Run == nil {
+ j.Run = runs[j.RunID]
+ }
+ }
+ if withRepo {
+ var runsList RunList = make([]*ActionRun, 0, len(runs))
+ for _, r := range runs {
+ runsList = append(runsList, r)
+ }
+ return runsList.LoadRepos()
+ }
+ return nil
+}
+
+func (jobs ActionJobList) LoadAttributes(ctx context.Context, withRepo bool) error {
+ return jobs.LoadRuns(ctx, withRepo)
+}
+
+type FindRunJobOptions struct {
+ db.ListOptions
+ RunID int64
+ RepoID int64
+ OwnerID int64
+ CommitSHA string
+ Statuses []Status
+ UpdatedBefore timeutil.TimeStamp
+}
+
+func (opts FindRunJobOptions) toConds() builder.Cond {
+ cond := builder.NewCond()
+ if opts.RunID > 0 {
+ cond = cond.And(builder.Eq{"run_id": opts.RunID})
+ }
+ if opts.RepoID > 0 {
+ cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
+ }
+ if opts.OwnerID > 0 {
+ cond = cond.And(builder.Eq{"owner_id": opts.OwnerID})
+ }
+ if opts.CommitSHA != "" {
+ cond = cond.And(builder.Eq{"commit_sha": opts.CommitSHA})
+ }
+ if len(opts.Statuses) > 0 {
+ cond = cond.And(builder.In("status", opts.Statuses))
+ }
+ if opts.UpdatedBefore > 0 {
+ cond = cond.And(builder.Lt{"updated": opts.UpdatedBefore})
+ }
+ return cond
+}
+
+func FindRunJobs(ctx context.Context, opts FindRunJobOptions) (ActionJobList, int64, error) {
+ e := db.GetEngine(ctx).Where(opts.toConds())
+ if opts.PageSize > 0 && opts.Page >= 1 {
+ e.Limit(opts.PageSize, (opts.Page-1)*opts.PageSize)
+ }
+ var tasks ActionJobList
+ total, err := e.FindAndCount(&tasks)
+ return tasks, total, err
+}
+
+func CountRunJobs(ctx context.Context, opts FindRunJobOptions) (int64, error) {
+ return db.GetEngine(ctx).Where(opts.toConds()).Count(new(ActionRunJob))
+}
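> `FindRunJobs` pages through jobs matching `FindRunJobOptions`, and `ActionJobList.LoadAttributes` back-fills each job's run (and, on request, the run's repository). A hypothetical query sketch, with placeholder IDs and paging values:

```go
// Hypothetical query sketch built on FindRunJobs; IDs and paging are placeholders.
package actionsdemo

import (
	"context"
	"fmt"

	actions_model "code.gitea.io/gitea/models/actions"
	"code.gitea.io/gitea/models/db"
)

func listPendingJobs(ctx context.Context, repoID int64) error {
	jobs, total, err := actions_model.FindRunJobs(ctx, actions_model.FindRunJobOptions{
		ListOptions: db.ListOptions{Page: 1, PageSize: 20},
		RepoID:      repoID,
		Statuses:    []actions_model.Status{actions_model.StatusWaiting, actions_model.StatusBlocked},
	})
	if err != nil {
		return err
	}
	// Back-fill each job's Run, and the run's repository as well (withRepo=true).
	if err := jobs.LoadAttributes(ctx, true); err != nil {
		return err
	}
	fmt.Printf("loaded %d of %d pending jobs\n", len(jobs), total)
	return nil
}
```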
diff --git a/models/actions/run_list.go b/models/actions/run_list.go
new file mode 100644
index 0000000000..f9d8417227
--- /dev/null
+++ b/models/actions/run_list.go
@@ -0,0 +1,107 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+
+ "code.gitea.io/gitea/models/db"
+ repo_model "code.gitea.io/gitea/models/repo"
+ user_model "code.gitea.io/gitea/models/user"
+ "code.gitea.io/gitea/modules/container"
+ "code.gitea.io/gitea/modules/util"
+
+ "xorm.io/builder"
+)
+
+type RunList []*ActionRun
+
+// GetUserIDs returns a slice of trigger user IDs
+func (runs RunList) GetUserIDs() []int64 {
+ ids := make(container.Set[int64], len(runs))
+ for _, run := range runs {
+ ids.Add(run.TriggerUserID)
+ }
+ return ids.Values()
+}
+
+func (runs RunList) GetRepoIDs() []int64 {
+ ids := make(container.Set[int64], len(runs))
+ for _, run := range runs {
+ ids.Add(run.RepoID)
+ }
+ return ids.Values()
+}
+
+func (runs RunList) LoadTriggerUser(ctx context.Context) error {
+ userIDs := runs.GetUserIDs()
+ users := make(map[int64]*user_model.User, len(userIDs))
+ if err := db.GetEngine(ctx).In("id", userIDs).Find(&users); err != nil {
+ return err
+ }
+ for _, run := range runs {
+ if run.TriggerUserID == user_model.ActionsUserID {
+ run.TriggerUser = user_model.NewActionsUser()
+ } else {
+ run.TriggerUser = users[run.TriggerUserID]
+ }
+ }
+ return nil
+}
+
+func (runs RunList) LoadRepos() error {
+ repoIDs := runs.GetRepoIDs()
+ repos, err := repo_model.GetRepositoriesMapByIDs(repoIDs)
+ if err != nil {
+ return err
+ }
+ for _, run := range runs {
+ run.Repo = repos[run.RepoID]
+ }
+ return nil
+}
+
+type FindRunOptions struct {
+ db.ListOptions
+ RepoID int64
+ OwnerID int64
+ IsClosed util.OptionalBool
+ WorkflowFileName string
+}
+
+func (opts FindRunOptions) toConds() builder.Cond {
+ cond := builder.NewCond()
+ if opts.RepoID > 0 {
+ cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
+ }
+ if opts.OwnerID > 0 {
+ cond = cond.And(builder.Eq{"owner_id": opts.OwnerID})
+ }
+ if opts.IsClosed.IsFalse() {
+ cond = cond.And(builder.Eq{"status": StatusWaiting}.Or(
+ builder.Eq{"status": StatusRunning}))
+ } else if opts.IsClosed.IsTrue() {
+ cond = cond.And(
+ builder.Neq{"status": StatusWaiting}.And(
+ builder.Neq{"status": StatusRunning}))
+ }
+ if opts.WorkflowFileName != "" {
+ cond = cond.And(builder.Eq{"workflow_id": opts.WorkflowFileName})
+ }
+ return cond
+}
+
+func FindRuns(ctx context.Context, opts FindRunOptions) (RunList, int64, error) {
+ e := db.GetEngine(ctx).Where(opts.toConds())
+ if opts.PageSize > 0 && opts.Page >= 1 {
+ e.Limit(opts.PageSize, (opts.Page-1)*opts.PageSize)
+ }
+ var runs RunList
+ total, err := e.Desc("id").FindAndCount(&runs)
+ return runs, total, err
+}
+
+func CountRuns(ctx context.Context, opts FindRunOptions) (int64, error) {
+ return db.GetEngine(ctx).Where(opts.toConds()).Count(new(ActionRun))
+}
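> `FindRunOptions` treats `IsClosed=false` as "waiting or running" and `IsClosed=true` as everything else, and `RunList.LoadTriggerUser` substitutes the synthetic actions user for bot-triggered runs. A hypothetical listing sketch built on those helpers:

```go
// Hypothetical listing sketch: the most recent open runs of a repository.
package actionsdemo

import (
	"context"

	actions_model "code.gitea.io/gitea/models/actions"
	"code.gitea.io/gitea/models/db"
	"code.gitea.io/gitea/modules/util"
)

func listOpenRuns(ctx context.Context, repoID int64) (actions_model.RunList, error) {
	// IsClosed=false narrows the query to StatusWaiting or StatusRunning.
	runs, _, err := actions_model.FindRuns(ctx, actions_model.FindRunOptions{
		ListOptions: db.ListOptions{Page: 1, PageSize: 10},
		RepoID:      repoID,
		IsClosed:    util.OptionalBoolFalse,
	})
	if err != nil {
		return nil, err
	}
	// Resolve trigger users; bot-triggered runs get the synthetic actions user.
	if err := runs.LoadTriggerUser(ctx); err != nil {
		return nil, err
	}
	return runs, nil
}
```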
diff --git a/models/actions/runner.go b/models/actions/runner.go
new file mode 100644
index 0000000000..4efe105b08
--- /dev/null
+++ b/models/actions/runner.go
@@ -0,0 +1,252 @@
+// Copyright 2021 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "time"
+
+ "code.gitea.io/gitea/models/db"
+ repo_model "code.gitea.io/gitea/models/repo"
+ user_model "code.gitea.io/gitea/models/user"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/translation"
+ "code.gitea.io/gitea/modules/util"
+
+ runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
+ "xorm.io/builder"
+)
+
+// ActionRunner represents runner machines
+type ActionRunner struct {
+ ID int64
+ UUID string `xorm:"CHAR(36) UNIQUE"`
+ Name string `xorm:"VARCHAR(255)"`
+ OwnerID int64 `xorm:"index"` // org level runner, 0 means system
+ Owner *user_model.User `xorm:"-"`
+ RepoID int64 `xorm:"index"` // repo level runner; if OwnerID is also zero, it's a global runner
+ Repo *repo_model.Repository `xorm:"-"`
+ Description string `xorm:"TEXT"`
+ Base int // 0 native 1 docker 2 virtual machine
+ RepoRange string // glob match which repositories could use this runner
+
+ Token string `xorm:"-"`
+ TokenHash string `xorm:"UNIQUE"` // sha256 of token
+ TokenSalt string
+ // TokenLastEight string `xorm:"token_last_eight"` // it's unnecessary because we don't find runners by token
+
+ LastOnline timeutil.TimeStamp `xorm:"index"`
+ LastActive timeutil.TimeStamp `xorm:"index"`
+
+ // Store OS and Arch.
+ AgentLabels []string
+ // Store custom labels defined by the user.
+ CustomLabels []string
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ Deleted timeutil.TimeStamp `xorm:"deleted"`
+}
+
+func (r *ActionRunner) OwnType() string {
+ if r.RepoID != 0 {
+ return fmt.Sprintf("Repo(%s)", r.Repo.FullName())
+ }
+ if r.OwnerID != 0 {
+ return fmt.Sprintf("Org(%s)", r.Owner.Name)
+ }
+ return "Global"
+}
+
+func (r *ActionRunner) Status() runnerv1.RunnerStatus {
+ if time.Since(r.LastOnline.AsTime()) > time.Minute {
+ return runnerv1.RunnerStatus_RUNNER_STATUS_OFFLINE
+ }
+ if time.Since(r.LastActive.AsTime()) > 10*time.Second {
+ return runnerv1.RunnerStatus_RUNNER_STATUS_IDLE
+ }
+ return runnerv1.RunnerStatus_RUNNER_STATUS_ACTIVE
+}
+
+func (r *ActionRunner) StatusName() string {
+ return strings.ToLower(strings.TrimPrefix(r.Status().String(), "RUNNER_STATUS_"))
+}
+
+func (r *ActionRunner) StatusLocaleName(lang translation.Locale) string {
+ return lang.Tr("actions.runners.status." + r.StatusName())
+}
+
+func (r *ActionRunner) IsOnline() bool {
+ status := r.Status()
+ if status == runnerv1.RunnerStatus_RUNNER_STATUS_IDLE || status == runnerv1.RunnerStatus_RUNNER_STATUS_ACTIVE {
+ return true
+ }
+ return false
+}
+
+// AllLabels returns agent and custom labels
+func (r *ActionRunner) AllLabels() []string {
+ return append(r.AgentLabels, r.CustomLabels...)
+}
+
+// Editable checks if the runner is editable by the user
+func (r *ActionRunner) Editable(ownerID, repoID int64) bool {
+ if ownerID == 0 && repoID == 0 {
+ return true
+ }
+ if ownerID > 0 && r.OwnerID == ownerID {
+ return true
+ }
+ return repoID > 0 && r.RepoID == repoID
+}
+
+// LoadAttributes loads the attributes of the runner
+func (r *ActionRunner) LoadAttributes(ctx context.Context) error {
+ if r.OwnerID > 0 {
+ var user user_model.User
+ has, err := db.GetEngine(ctx).ID(r.OwnerID).Get(&user)
+ if err != nil {
+ return err
+ }
+ if has {
+ r.Owner = &user
+ }
+ }
+ if r.RepoID > 0 {
+ var repo repo_model.Repository
+ has, err := db.GetEngine(ctx).ID(r.RepoID).Get(&repo)
+ if err != nil {
+ return err
+ }
+ if has {
+ r.Repo = &repo
+ }
+ }
+ return nil
+}
+
+func (r *ActionRunner) GenerateToken() (err error) {
+ r.Token, r.TokenSalt, r.TokenHash, _, err = generateSaltedToken()
+ return err
+}
+
+func init() {
+ db.RegisterModel(&ActionRunner{})
+}
+
+type FindRunnerOptions struct {
+ db.ListOptions
+ RepoID int64
+ OwnerID int64
+ Sort string
+ Filter string
+ WithAvailable bool // not only runners that belong to the repo/owner, but also runners that it can use
+}
+
+func (opts FindRunnerOptions) toCond() builder.Cond {
+ cond := builder.NewCond()
+
+ if opts.RepoID > 0 {
+ c := builder.NewCond().And(builder.Eq{"repo_id": opts.RepoID})
+ if opts.WithAvailable {
+ c = c.Or(builder.Eq{"owner_id": builder.Select("owner_id").From("repository").Where(builder.Eq{"id": opts.RepoID})})
+ c = c.Or(builder.Eq{"repo_id": 0, "owner_id": 0})
+ }
+ cond = cond.And(c)
+ }
+ if opts.OwnerID > 0 {
+ c := builder.NewCond().And(builder.Eq{"owner_id": opts.OwnerID})
+ if opts.WithAvailable {
+ c = c.Or(builder.Eq{"repo_id": 0, "owner_id": 0})
+ }
+ cond = cond.And(c)
+ }
+
+ if opts.Filter != "" {
+ cond = cond.And(builder.Like{"name", opts.Filter})
+ }
+ return cond
+}
+
+func (opts FindRunnerOptions) toOrder() string {
+ switch opts.Sort {
+ case "online":
+ return "last_online DESC"
+ case "offline":
+ return "last_online ASC"
+ case "alphabetically":
+ return "name ASC"
+ }
+ return "last_online DESC"
+}
+
+func CountRunners(ctx context.Context, opts FindRunnerOptions) (int64, error) {
+ return db.GetEngine(ctx).
+ Where(opts.toCond()).
+ Count(ActionRunner{})
+}
+
+func FindRunners(ctx context.Context, opts FindRunnerOptions) (runners RunnerList, err error) {
+ sess := db.GetEngine(ctx).
+ Where(opts.toCond()).
+ OrderBy(opts.toOrder())
+ if opts.Page > 0 {
+ sess.Limit(opts.PageSize, (opts.Page-1)*opts.PageSize)
+ }
+ return runners, sess.Find(&runners)
+}
+
+// GetRunnerByUUID returns a runner via uuid
+func GetRunnerByUUID(ctx context.Context, uuid string) (*ActionRunner, error) {
+ var runner ActionRunner
+ has, err := db.GetEngine(ctx).Where("uuid=?", uuid).Get(&runner)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("runner with uuid %s: %w", uuid, util.ErrNotExist)
+ }
+ return &runner, nil
+}
+
+// GetRunnerByID returns a runner via id
+func GetRunnerByID(ctx context.Context, id int64) (*ActionRunner, error) {
+ var runner ActionRunner
+ has, err := db.GetEngine(ctx).Where("id=?", id).Get(&runner)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("runner with id %d: %w", id, util.ErrNotExist)
+ }
+ return &runner, nil
+}
+
+// UpdateRunner updates runner's information.
+func UpdateRunner(ctx context.Context, r *ActionRunner, cols ...string) error {
+ e := db.GetEngine(ctx)
+ var err error
+ if len(cols) == 0 {
+ _, err = e.ID(r.ID).AllCols().Update(r)
+ } else {
+ _, err = e.ID(r.ID).Cols(cols...).Update(r)
+ }
+ return err
+}
+
+// DeleteRunner deletes a runner by given ID.
+func DeleteRunner(ctx context.Context, id int64) error {
+ if _, err := GetRunnerByID(ctx, id); err != nil {
+ return err
+ }
+
+ _, err := db.GetEngine(ctx).Delete(&ActionRunner{ID: id})
+ return err
+}
+
+// CreateRunner creates new runner.
+func CreateRunner(ctx context.Context, t *ActionRunner) error {
+ _, err := db.GetEngine(ctx).Insert(t)
+ return err
+}
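> `ActionRunner.Status` derives the runner state purely from heartbeat timestamps: more than a minute without `LastOnline` means offline, more than ten seconds without `LastActive` means idle, otherwise active. A small sketch (no database involved, fabricated timestamps) that exercises the derivation:

```go
// Status derivation sketch; the runner and its timestamps are fabricated.
package actionsdemo

import (
	"fmt"
	"time"

	actions_model "code.gitea.io/gitea/models/actions"
	"code.gitea.io/gitea/modules/timeutil"
)

func printRunnerStatus() {
	r := &actions_model.ActionRunner{
		Name:       "demo-runner",
		LastOnline: timeutil.TimeStampNow(),                                       // heartbeat just arrived
		LastActive: timeutil.TimeStamp(time.Now().Add(-30 * time.Second).Unix()), // last job 30s ago
	}
	// >1 minute without LastOnline => offline; >10s without LastActive => idle.
	fmt.Println(r.StatusName(), r.IsOnline()) // expected: "idle true"
}
```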
diff --git a/models/actions/runner_list.go b/models/actions/runner_list.go
new file mode 100644
index 0000000000..87f0886b47
--- /dev/null
+++ b/models/actions/runner_list.go
@@ -0,0 +1,77 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+
+ "code.gitea.io/gitea/models/db"
+ repo_model "code.gitea.io/gitea/models/repo"
+ user_model "code.gitea.io/gitea/models/user"
+ "code.gitea.io/gitea/modules/container"
+)
+
+type RunnerList []*ActionRunner
+
+// GetUserIDs returns a slice of owner user IDs
+func (runners RunnerList) GetUserIDs() []int64 {
+ ids := make(container.Set[int64], len(runners))
+ for _, runner := range runners {
+ if runner.OwnerID == 0 {
+ continue
+ }
+ ids.Add(runner.OwnerID)
+ }
+ return ids.Values()
+}
+
+func (runners RunnerList) LoadOwners(ctx context.Context) error {
+ userIDs := runners.GetUserIDs()
+ users := make(map[int64]*user_model.User, len(userIDs))
+ if err := db.GetEngine(ctx).In("id", userIDs).Find(&users); err != nil {
+ return err
+ }
+ for _, runner := range runners {
+ if runner.OwnerID > 0 && runner.Owner == nil {
+ runner.Owner = users[runner.OwnerID]
+ }
+ }
+ return nil
+}
+
+func (runners RunnerList) getRepoIDs() []int64 {
+ repoIDs := make(container.Set[int64], len(runners))
+ for _, runner := range runners {
+ if runner.RepoID == 0 {
+ continue
+ }
+ if _, ok := repoIDs[runner.RepoID]; !ok {
+ repoIDs[runner.RepoID] = struct{}{}
+ }
+ }
+ return repoIDs.Values()
+}
+
+func (runners RunnerList) LoadRepos(ctx context.Context) error {
+ repoIDs := runners.getRepoIDs()
+ repos := make(map[int64]*repo_model.Repository, len(repoIDs))
+ if err := db.GetEngine(ctx).In("id", repoIDs).Find(&repos); err != nil {
+ return err
+ }
+
+ for _, runner := range runners {
+ if runner.RepoID > 0 && runner.Repo == nil {
+ runner.Repo = repos[runner.RepoID]
+ }
+ }
+ return nil
+}
+
+func (runners RunnerList) LoadAttributes(ctx context.Context) error {
+ if err := runners.LoadOwners(ctx); err != nil {
+ return err
+ }
+
+ return runners.LoadRepos(ctx)
+}
diff --git a/models/actions/runner_token.go b/models/actions/runner_token.go
new file mode 100644
index 0000000000..fabd6c644c
--- /dev/null
+++ b/models/actions/runner_token.go
@@ -0,0 +1,86 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "fmt"
+
+ "code.gitea.io/gitea/models/db"
+ repo_model "code.gitea.io/gitea/models/repo"
+ user_model "code.gitea.io/gitea/models/user"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/util"
+)
+
+// ActionRunnerToken represents runner tokens
+type ActionRunnerToken struct {
+ ID int64
+ Token string `xorm:"UNIQUE"`
+ OwnerID int64 `xorm:"index"` // org level runner, 0 means system
+ Owner *user_model.User `xorm:"-"`
+ RepoID int64 `xorm:"index"` // repo level runner; if OwnerID is also zero, it's a global runner
+ Repo *repo_model.Repository `xorm:"-"`
+ IsActive bool
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ Deleted timeutil.TimeStamp `xorm:"deleted"`
+}
+
+func init() {
+ db.RegisterModel(new(ActionRunnerToken))
+}
+
+// GetRunnerToken returns an action runner token by its token value
+func GetRunnerToken(ctx context.Context, token string) (*ActionRunnerToken, error) {
+ var runnerToken ActionRunnerToken
+ has, err := db.GetEngine(ctx).Where("token=?", token).Get(&runnerToken)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("runner token %q: %w", token, util.ErrNotExist)
+ }
+ return &runnerToken, nil
+}
+
+// UpdateRunnerToken updates runner token information.
+func UpdateRunnerToken(ctx context.Context, r *ActionRunnerToken, cols ...string) (err error) {
+ e := db.GetEngine(ctx)
+
+ if len(cols) == 0 {
+ _, err = e.ID(r.ID).AllCols().Update(r)
+ } else {
+ _, err = e.ID(r.ID).Cols(cols...).Update(r)
+ }
+ return err
+}
+
+// NewRunnerToken creates a new runner token
+func NewRunnerToken(ctx context.Context, ownerID, repoID int64) (*ActionRunnerToken, error) {
+ token, err := util.CryptoRandomString(40)
+ if err != nil {
+ return nil, err
+ }
+ runnerToken := &ActionRunnerToken{
+ OwnerID: ownerID,
+ RepoID: repoID,
+ IsActive: false,
+ Token: token,
+ }
+ _, err = db.GetEngine(ctx).Insert(runnerToken)
+ return runnerToken, err
+}
+
+// GetUnactivatedRunnerToken returns the latest unactivated runner token
+func GetUnactivatedRunnerToken(ctx context.Context, ownerID, repoID int64) (*ActionRunnerToken, error) {
+ var runnerToken ActionRunnerToken
+ has, err := db.GetEngine(ctx).Where("owner_id=? AND repo_id=? AND is_active=?", ownerID, repoID, false).OrderBy("id DESC").Get(&runnerToken)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("runner token: %w", util.ErrNotExist)
+ }
+ return &runnerToken, nil
+}
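> Runner registration revolves around `ActionRunnerToken`: an admin mints a token, the runner presents it, and the server flips `IsActive` so the token cannot be used for another registration. A hypothetical flow stitched together from the helpers above:

```go
// Hypothetical registration flow stitched from the helpers in runner_token.go.
package actionsdemo

import (
	"context"
	"fmt"

	actions_model "code.gitea.io/gitea/models/actions"
)

func registerRunnerToken(ctx context.Context, ownerID, repoID int64) error {
	// 1. Mint a fresh token for the admin to paste into the runner's config.
	token, err := actions_model.NewRunnerToken(ctx, ownerID, repoID)
	if err != nil {
		return err
	}
	fmt.Println("register the runner with:", token.Token)

	// 2. When the runner registers, the server resolves the token record...
	found, err := actions_model.GetRunnerToken(ctx, token.Token)
	if err != nil {
		return err
	}
	// 3. ...and marks it active so it cannot be used for another registration.
	found.IsActive = true
	return actions_model.UpdateRunnerToken(ctx, found, "is_active")
}
```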
diff --git a/models/actions/status.go b/models/actions/status.go
new file mode 100644
index 0000000000..059cf9bc09
--- /dev/null
+++ b/models/actions/status.go
@@ -0,0 +1,100 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "code.gitea.io/gitea/modules/translation"
+
+ runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
+)
+
+// Status represents the status of ActionRun, ActionRunJob, ActionTask, or ActionTaskStep
+type Status int
+
+const (
+ StatusUnknown Status = iota // 0, consistent with runnerv1.Result_RESULT_UNSPECIFIED
+ StatusSuccess // 1, consistent with runnerv1.Result_RESULT_SUCCESS
+ StatusFailure // 2, consistent with runnerv1.Result_RESULT_FAILURE
+ StatusCancelled // 3, consistent with runnerv1.Result_RESULT_CANCELLED
+ StatusSkipped // 4, consistent with runnerv1.Result_RESULT_SKIPPED
+ StatusWaiting // 5, isn't a runnerv1.Result
+ StatusRunning // 6, isn't a runnerv1.Result
+ StatusBlocked // 7, isn't a runnerv1.Result
+)
+
+var statusNames = map[Status]string{
+ StatusUnknown: "unknown",
+ StatusWaiting: "waiting",
+ StatusRunning: "running",
+ StatusSuccess: "success",
+ StatusFailure: "failure",
+ StatusCancelled: "cancelled",
+ StatusSkipped: "skipped",
+ StatusBlocked: "blocked",
+}
+
+// String returns the string name of the Status
+func (s Status) String() string {
+ return statusNames[s]
+}
+
+// LocaleString returns the locale string name of the Status
+func (s Status) LocaleString(lang translation.Locale) string {
+ return lang.Tr("actions.status." + s.String())
+}
+
+// IsDone returns whether the Status is final
+func (s Status) IsDone() bool {
+ return s.In(StatusSuccess, StatusFailure, StatusCancelled, StatusSkipped)
+}
+
+// HasRun returns whether the Status is a result of running
+func (s Status) HasRun() bool {
+ return s.In(StatusSuccess, StatusFailure)
+}
+
+func (s Status) IsUnknown() bool {
+ return s == StatusUnknown
+}
+
+func (s Status) IsSuccess() bool {
+ return s == StatusSuccess
+}
+
+func (s Status) IsFailure() bool {
+ return s == StatusFailure
+}
+
+func (s Status) IsCancelled() bool {
+ return s == StatusCancelled
+}
+
+func (s Status) IsSkipped() bool {
+ return s == StatusSkipped
+}
+
+func (s Status) IsWaiting() bool {
+ return s == StatusWaiting
+}
+
+func (s Status) IsRunning() bool {
+ return s == StatusRunning
+}
+
+// In returns whether s is one of the given statuses
+func (s Status) In(statuses ...Status) bool {
+ for _, v := range statuses {
+ if s == v {
+ return true
+ }
+ }
+ return false
+}
+
+func (s Status) AsResult() runnerv1.Result {
+ if s.IsDone() {
+ return runnerv1.Result(s)
+ }
+ return runnerv1.Result_RESULT_UNSPECIFIED
+}
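> The first five `Status` values are deliberately kept numerically identical to `runnerv1.Result`, which is why `AsResult` can cast directly for finished statuses and falls back to `RESULT_UNSPECIFIED` otherwise. A small sketch showing the mapping:

```go
// Mapping sketch: Status values 1..4 cast directly to runnerv1.Result.
package actionsdemo

import (
	"fmt"

	actions_model "code.gitea.io/gitea/models/actions"
)

func showStatusMapping() {
	for _, s := range []actions_model.Status{
		actions_model.StatusSuccess,   // done -> RESULT_SUCCESS
		actions_model.StatusCancelled, // done -> RESULT_CANCELLED
		actions_model.StatusRunning,   // not done -> RESULT_UNSPECIFIED
	} {
		fmt.Printf("%-9s done=%-5v result=%v\n", s.String(), s.IsDone(), s.AsResult())
	}
}
```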
diff --git a/models/actions/task.go b/models/actions/task.go
new file mode 100644
index 0000000000..5b6206c346
--- /dev/null
+++ b/models/actions/task.go
@@ -0,0 +1,504 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "crypto/subtle"
+ "fmt"
+ "time"
+
+ auth_model "code.gitea.io/gitea/models/auth"
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/container"
+ "code.gitea.io/gitea/modules/log"
+ "code.gitea.io/gitea/modules/setting"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/util"
+
+ runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
+ lru "github.com/hashicorp/golang-lru"
+ "github.com/nektos/act/pkg/jobparser"
+ "google.golang.org/protobuf/types/known/timestamppb"
+ "xorm.io/builder"
+)
+
+// ActionTask represents a distribution of a job to a runner
+type ActionTask struct {
+ ID int64
+ JobID int64
+ Job *ActionRunJob `xorm:"-"`
+ Steps []*ActionTaskStep `xorm:"-"`
+ Attempt int64
+ RunnerID int64 `xorm:"index"`
+ Status Status `xorm:"index"`
+ Started timeutil.TimeStamp `xorm:"index"`
+ Stopped timeutil.TimeStamp
+
+ RepoID int64 `xorm:"index"`
+ OwnerID int64 `xorm:"index"`
+ CommitSHA string `xorm:"index"`
+ IsForkPullRequest bool
+
+ Token string `xorm:"-"`
+ TokenHash string `xorm:"UNIQUE"` // sha256 of token
+ TokenSalt string
+ TokenLastEight string `xorm:"index token_last_eight"`
+
+ LogFilename string // file name of log
+ LogInStorage bool // read log from database or from storage
+ LogLength int64 // lines count
+ LogSize int64 // blob size
+ LogIndexes LogIndexes `xorm:"LONGBLOB"` // line number to offset
+ LogExpired bool // files that are too old will be deleted
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated index"`
+}
+
+var successfulTokenTaskCache *lru.Cache
+
+func init() {
+ db.RegisterModel(new(ActionTask), func() error {
+ if setting.SuccessfulTokensCacheSize > 0 {
+ var err error
+ successfulTokenTaskCache, err = lru.New(setting.SuccessfulTokensCacheSize)
+ if err != nil {
+ return fmt.Errorf("unable to allocate Task cache: %v", err)
+ }
+ } else {
+ successfulTokenTaskCache = nil
+ }
+ return nil
+ })
+}
+
+func (task *ActionTask) Duration() time.Duration {
+ return calculateDuration(task.Started, task.Stopped, task.Status)
+}
+
+func (task *ActionTask) IsStopped() bool {
+ return task.Stopped > 0
+}
+
+func (task *ActionTask) GetRunLink() string {
+ if task.Job == nil || task.Job.Run == nil {
+ return ""
+ }
+ return task.Job.Run.Link()
+}
+
+func (task *ActionTask) GetCommitLink() string {
+ if task.Job == nil || task.Job.Run == nil || task.Job.Run.Repo == nil {
+ return ""
+ }
+ return task.Job.Run.Repo.CommitLink(task.CommitSHA)
+}
+
+func (task *ActionTask) GetRepoName() string {
+ if task.Job == nil || task.Job.Run == nil || task.Job.Run.Repo == nil {
+ return ""
+ }
+ return task.Job.Run.Repo.FullName()
+}
+
+func (task *ActionTask) GetRepoLink() string {
+ if task.Job == nil || task.Job.Run == nil || task.Job.Run.Repo == nil {
+ return ""
+ }
+ return task.Job.Run.Repo.Link()
+}
+
+func (task *ActionTask) LoadJob(ctx context.Context) error {
+ if task.Job == nil {
+ job, err := GetRunJobByID(ctx, task.JobID)
+ if err != nil {
+ return err
+ }
+ task.Job = job
+ }
+ return nil
+}
+
+// LoadAttributes load Job Steps if not loaded
+func (task *ActionTask) LoadAttributes(ctx context.Context) error {
+ if task == nil {
+ return nil
+ }
+ if err := task.LoadJob(ctx); err != nil {
+ return err
+ }
+
+ if err := task.Job.LoadAttributes(ctx); err != nil {
+ return err
+ }
+
+ if task.Steps == nil { // be careful, an empty slice (not nil) also means loaded
+ steps, err := GetTaskStepsByTaskID(ctx, task.ID)
+ if err != nil {
+ return err
+ }
+ task.Steps = steps
+ }
+
+ return nil
+}
+
+func (task *ActionTask) GenerateToken() (err error) {
+ task.Token, task.TokenSalt, task.TokenHash, task.TokenLastEight, err = generateSaltedToken()
+ return err
+}
+
+func GetTaskByID(ctx context.Context, id int64) (*ActionTask, error) {
+ var task ActionTask
+ has, err := db.GetEngine(ctx).Where("id=?", id).Get(&task)
+ if err != nil {
+ return nil, err
+ } else if !has {
+ return nil, fmt.Errorf("task with id %d: %w", id, util.ErrNotExist)
+ }
+
+ return &task, nil
+}
+
+func GetRunningTaskByToken(ctx context.Context, token string) (*ActionTask, error) {
+ errNotExist := fmt.Errorf("task with token %q: %w", token, util.ErrNotExist)
+ if token == "" {
+ return nil, errNotExist
+ }
+ // A token is a 40-character hexadecimal string (the length of a SHA1 sum)
+ if len(token) != 40 {
+ return nil, errNotExist
+ }
+ for _, x := range []byte(token) {
+ if x < '0' || (x > '9' && x < 'a') || x > 'f' {
+ return nil, errNotExist
+ }
+ }
+
+ lastEight := token[len(token)-8:]
+
+ if id := getTaskIDFromCache(token); id > 0 {
+ task := &ActionTask{
+ TokenLastEight: lastEight,
+ }
+ // Re-get the task from the db in case it has been deleted in the intervening period
+ has, err := db.GetEngine(ctx).ID(id).Get(task)
+ if err != nil {
+ return nil, err
+ }
+ if has {
+ return task, nil
+ }
+ successfulTokenTaskCache.Remove(token)
+ }
+
+ var tasks []*ActionTask
+ err := db.GetEngine(ctx).Where("token_last_eight = ? AND status = ?", lastEight, StatusRunning).Find(&tasks)
+ if err != nil {
+ return nil, err
+ } else if len(tasks) == 0 {
+ return nil, errNotExist
+ }
+
+ for _, t := range tasks {
+ tempHash := auth_model.HashToken(token, t.TokenSalt)
+ if subtle.ConstantTimeCompare([]byte(t.TokenHash), []byte(tempHash)) == 1 {
+ if successfulTokenTaskCache != nil {
+ successfulTokenTaskCache.Add(token, t.ID)
+ }
+ return t, nil
+ }
+ }
+ return nil, errNotExist
+}
+
+func CreateTaskForRunner(ctx context.Context, runner *ActionRunner) (*ActionTask, bool, error) {
+ dbCtx, commiter, err := db.TxContext(ctx)
+ if err != nil {
+ return nil, false, err
+ }
+ defer commiter.Close()
+ ctx = dbCtx.WithContext(ctx)
+
+ e := db.GetEngine(ctx)
+
+ jobCond := builder.NewCond()
+ if runner.RepoID != 0 {
+ jobCond = builder.Eq{"repo_id": runner.RepoID}
+ } else if runner.OwnerID != 0 {
+ jobCond = builder.In("repo_id", builder.Select("id").From("repository").Where(builder.Eq{"owner_id": runner.OwnerID}))
+ }
+ if jobCond.IsValid() {
+ jobCond = builder.In("run_id", builder.Select("id").From("action_run").Where(jobCond))
+ }
+
+ var jobs []*ActionRunJob
+ if err := e.Where("task_id=? AND status=?", 0, StatusWaiting).And(jobCond).Asc("id").Find(&jobs); err != nil {
+ return nil, false, err
+ }
+
+ // TODO: a more efficient way to filter labels
+ var job *ActionRunJob
+ labels := runner.AgentLabels
+ labels = append(labels, runner.CustomLabels...)
+ log.Trace("runner labels: %v", labels)
+ for _, v := range jobs {
+ if isSubset(labels, v.RunsOn) {
+ job = v
+ break
+ }
+ }
+ if job == nil {
+ return nil, false, nil
+ }
+ if err := job.LoadAttributes(ctx); err != nil {
+ return nil, false, err
+ }
+
+ now := timeutil.TimeStampNow()
+ job.Attempt++
+ job.Started = now
+ job.Status = StatusRunning
+
+ task := &ActionTask{
+ JobID: job.ID,
+ Attempt: job.Attempt,
+ RunnerID: runner.ID,
+ Started: now,
+ Status: StatusRunning,
+ RepoID: job.RepoID,
+ OwnerID: job.OwnerID,
+ CommitSHA: job.CommitSHA,
+ IsForkPullRequest: job.IsForkPullRequest,
+ }
+ if err := task.GenerateToken(); err != nil {
+ return nil, false, err
+ }
+
+ var workflowJob *jobparser.Job
+ if gots, err := jobparser.Parse(job.WorkflowPayload); err != nil {
+ return nil, false, fmt.Errorf("parse workflow of job %d: %w", job.ID, err)
+ } else if len(gots) != 1 {
+ return nil, false, fmt.Errorf("workflow of job %d: not a single workflow", job.ID)
+ } else {
+ _, workflowJob = gots[0].Job()
+ }
+
+ if _, err := e.Insert(task); err != nil {
+ return nil, false, err
+ }
+
+ task.LogFilename = logFileName(job.Run.Repo.FullName(), task.ID)
+ if _, err := e.ID(task.ID).Cols("log_filename").Update(task); err != nil {
+ return nil, false, err
+ }
+
+ if len(workflowJob.Steps) > 0 {
+ steps := make([]*ActionTaskStep, len(workflowJob.Steps))
+ for i, v := range workflowJob.Steps {
+ steps[i] = &ActionTaskStep{
+ Name: v.String(),
+ TaskID: task.ID,
+ Index: int64(i),
+ RepoID: task.RepoID,
+ Status: StatusWaiting,
+ }
+ }
+ if _, err := e.Insert(steps); err != nil {
+ return nil, false, err
+ }
+ task.Steps = steps
+ }
+
+ job.TaskID = task.ID
+ if n, err := UpdateRunJob(ctx, job, builder.Eq{"task_id": 0}); err != nil {
+ return nil, false, err
+ } else if n != 1 {
+ return nil, false, nil
+ }
+
+ if job.Run.Status.IsWaiting() {
+ job.Run.Status = StatusRunning
+ job.Run.Started = now
+ if err := UpdateRun(ctx, job.Run, "status", "started"); err != nil {
+ return nil, false, err
+ }
+ }
+
+ task.Job = job
+
+ if err := commiter.Commit(); err != nil {
+ return nil, false, err
+ }
+
+ return task, true, nil
+}
+
+func UpdateTask(ctx context.Context, task *ActionTask, cols ...string) error {
+ sess := db.GetEngine(ctx).ID(task.ID)
+ if len(cols) > 0 {
+ sess.Cols(cols...)
+ }
+ _, err := sess.Update(task)
+ return err
+}
+
+func UpdateTaskByState(ctx context.Context, state *runnerv1.TaskState) (*ActionTask, error) {
+ stepStates := map[int64]*runnerv1.StepState{}
+ for _, v := range state.Steps {
+ stepStates[v.Id] = v
+ }
+
+ ctx, commiter, err := db.TxContext(ctx)
+ if err != nil {
+ return nil, err
+ }
+ defer commiter.Close()
+
+ e := db.GetEngine(ctx)
+
+ task := &ActionTask{}
+ if has, err := e.ID(state.Id).Get(task); err != nil {
+ return nil, err
+ } else if !has {
+ return nil, util.ErrNotExist
+ }
+
+ if state.Result != runnerv1.Result_RESULT_UNSPECIFIED {
+ task.Status = Status(state.Result)
+ task.Stopped = timeutil.TimeStamp(state.StoppedAt.AsTime().Unix())
+ if _, err := UpdateRunJob(ctx, &ActionRunJob{
+ ID: task.JobID,
+ Status: task.Status,
+ Stopped: task.Stopped,
+ }, nil); err != nil {
+ return nil, err
+ }
+ }
+
+ if _, err := e.ID(task.ID).Update(task); err != nil {
+ return nil, err
+ }
+
+ if err := task.LoadAttributes(ctx); err != nil {
+ return nil, err
+ }
+
+ for _, step := range task.Steps {
+ var result runnerv1.Result
+ if v, ok := stepStates[step.Index]; ok {
+ result = v.Result
+ step.LogIndex = v.LogIndex
+ step.LogLength = v.LogLength
+ step.Started = convertTimestamp(v.StartedAt)
+ step.Stopped = convertTimestamp(v.StoppedAt)
+ }
+ if result != runnerv1.Result_RESULT_UNSPECIFIED {
+ step.Status = Status(result)
+ } else if step.Started != 0 {
+ step.Status = StatusRunning
+ }
+ if _, err := e.ID(step.ID).Update(step); err != nil {
+ return nil, err
+ }
+ }
+
+ if err := commiter.Commit(); err != nil {
+ return nil, err
+ }
+
+ return task, nil
+}
+
+func StopTask(ctx context.Context, taskID int64, status Status) error {
+ if !status.IsDone() {
+ return fmt.Errorf("cannot stop task with status %v", status)
+ }
+ e := db.GetEngine(ctx)
+
+ task := &ActionTask{}
+ if has, err := e.ID(taskID).Get(task); err != nil {
+ return err
+ } else if !has {
+ return util.ErrNotExist
+ }
+ if task.Status.IsDone() {
+ return nil
+ }
+
+ now := timeutil.TimeStampNow()
+ task.Status = status
+ task.Stopped = now
+ if _, err := UpdateRunJob(ctx, &ActionRunJob{
+ ID: task.JobID,
+ Status: task.Status,
+ Stopped: task.Stopped,
+ }, nil); err != nil {
+ return err
+ }
+
+ if _, err := e.ID(task.ID).Update(task); err != nil {
+ return err
+ }
+
+ if err := task.LoadAttributes(ctx); err != nil {
+ return err
+ }
+
+ for _, step := range task.Steps {
+ if !step.Status.IsDone() {
+ step.Status = status
+ if step.Started == 0 {
+ step.Started = now
+ }
+ step.Stopped = now
+ }
+ if _, err := e.ID(step.ID).Update(step); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func isSubset(set, subset []string) bool {
+ m := make(container.Set[string], len(set))
+ for _, v := range set {
+ m.Add(v)
+ }
+
+ for _, v := range subset {
+ if !m.Contains(v) {
+ return false
+ }
+ }
+ return true
+}
+
+func convertTimestamp(timestamp *timestamppb.Timestamp) timeutil.TimeStamp {
+ if timestamp.GetSeconds() == 0 && timestamp.GetNanos() == 0 {
+ return timeutil.TimeStamp(0)
+ }
+ return timeutil.TimeStamp(timestamp.AsTime().Unix())
+}
+
+func logFileName(repoFullName string, taskID int64) string {
+ return fmt.Sprintf("%s/%02x/%d.log", repoFullName, taskID%256, taskID)
+}
+
+func getTaskIDFromCache(token string) int64 {
+ if successfulTokenTaskCache == nil {
+ return 0
+ }
+ tInterface, ok := successfulTokenTaskCache.Get(token)
+ if !ok {
+ return 0
+ }
+ t, ok := tInterface.(int64)
+ if !ok {
+ return 0
+ }
+ return t
+}
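> `GetRunningTaskByToken` narrows candidates by the stored last eight characters of the token, verifies the salted hash with a constant-time compare, and caches successful lookups. A hypothetical handler-side sketch that authenticates a runner request with it:

```go
// Hypothetical handler-side sketch: resolve the task a runner is reporting for
// from its bearer token, using GetRunningTaskByToken from task.go.
package actionsdemo

import (
	"context"
	"errors"
	"fmt"

	actions_model "code.gitea.io/gitea/models/actions"
	"code.gitea.io/gitea/modules/util"
)

func authenticateTask(ctx context.Context, bearerToken string) (*actions_model.ActionTask, error) {
	// Candidates are filtered by the token's last eight characters, then the
	// salted hash is verified in constant time; hits are cached by token.
	task, err := actions_model.GetRunningTaskByToken(ctx, bearerToken)
	if errors.Is(err, util.ErrNotExist) {
		return nil, fmt.Errorf("unknown or stopped task token")
	}
	if err != nil {
		return nil, err
	}
	return task, nil
}
```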
diff --git a/models/actions/task_list.go b/models/actions/task_list.go
new file mode 100644
index 0000000000..1f6b16772b
--- /dev/null
+++ b/models/actions/task_list.go
@@ -0,0 +1,105 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/container"
+ "code.gitea.io/gitea/modules/timeutil"
+
+ "xorm.io/builder"
+)
+
+type TaskList []*ActionTask
+
+func (tasks TaskList) GetJobIDs() []int64 {
+ ids := make(container.Set[int64], len(tasks))
+ for _, t := range tasks {
+ if t.JobID == 0 {
+ continue
+ }
+ ids.Add(t.JobID)
+ }
+ return ids.Values()
+}
+
+func (tasks TaskList) LoadJobs(ctx context.Context) error {
+ jobIDs := tasks.GetJobIDs()
+ jobs := make(map[int64]*ActionRunJob, len(jobIDs))
+ if err := db.GetEngine(ctx).In("id", jobIDs).Find(&jobs); err != nil {
+ return err
+ }
+ for _, t := range tasks {
+ if t.JobID > 0 && t.Job == nil {
+ t.Job = jobs[t.JobID]
+ }
+ }
+
+ // TODO: Replace with "ActionJobList(maps.Values(jobs))" once available
+ var jobsList ActionJobList = make([]*ActionRunJob, 0, len(jobs))
+ for _, j := range jobs {
+ jobsList = append(jobsList, j)
+ }
+ return jobsList.LoadAttributes(ctx, true)
+}
+
+func (tasks TaskList) LoadAttributes(ctx context.Context) error {
+ return tasks.LoadJobs(ctx)
+}
+
+type FindTaskOptions struct {
+ db.ListOptions
+ RepoID int64
+ OwnerID int64
+ CommitSHA string
+ Status Status
+ UpdatedBefore timeutil.TimeStamp
+ StartedBefore timeutil.TimeStamp
+ RunnerID int64
+ IDOrderDesc bool
+}
+
+func (opts FindTaskOptions) toConds() builder.Cond {
+ cond := builder.NewCond()
+ if opts.RepoID > 0 {
+ cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
+ }
+ if opts.OwnerID > 0 {
+ cond = cond.And(builder.Eq{"owner_id": opts.OwnerID})
+ }
+ if opts.CommitSHA != "" {
+ cond = cond.And(builder.Eq{"commit_sha": opts.CommitSHA})
+ }
+ if opts.Status > StatusUnknown {
+ cond = cond.And(builder.Eq{"status": opts.Status})
+ }
+ if opts.UpdatedBefore > 0 {
+ cond = cond.And(builder.Lt{"updated": opts.UpdatedBefore})
+ }
+ if opts.StartedBefore > 0 {
+ cond = cond.And(builder.Lt{"started": opts.StartedBefore})
+ }
+ if opts.RunnerID > 0 {
+ cond = cond.And(builder.Eq{"runner_id": opts.RunnerID})
+ }
+ return cond
+}
+
+func FindTasks(ctx context.Context, opts FindTaskOptions) (TaskList, error) {
+ e := db.GetEngine(ctx).Where(opts.toConds())
+ if opts.PageSize > 0 && opts.Page >= 1 {
+ e.Limit(opts.PageSize, (opts.Page-1)*opts.PageSize)
+ }
+ if opts.IDOrderDesc {
+ e.OrderBy("id DESC")
+ }
+ var tasks TaskList
+ return tasks, e.Find(&tasks)
+}
+
+func CountTasks(ctx context.Context, opts FindTaskOptions) (int64, error) {
+ return db.GetEngine(ctx).Where(opts.toConds()).Count(new(ActionTask))
+}
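> The `UpdatedBefore` and `StartedBefore` filters in `FindTaskOptions` are what make stale-task housekeeping possible. A hypothetical cleanup sketch (the ten-minute cutoff is an arbitrary example value) that cancels running tasks whose runners have gone silent:

```go
// Hypothetical cleanup sketch; the 10-minute cutoff is an arbitrary example.
package actionsdemo

import (
	"context"
	"time"

	actions_model "code.gitea.io/gitea/models/actions"
	"code.gitea.io/gitea/modules/timeutil"
)

func cancelStaleTasks(ctx context.Context) error {
	cutoff := timeutil.TimeStamp(time.Now().Add(-10 * time.Minute).Unix())
	tasks, err := actions_model.FindTasks(ctx, actions_model.FindTaskOptions{
		Status:        actions_model.StatusRunning,
		UpdatedBefore: cutoff,
	})
	if err != nil {
		return err
	}
	for _, task := range tasks {
		// StopTask marks the task, its job, and its unfinished steps as cancelled.
		if err := actions_model.StopTask(ctx, task.ID, actions_model.StatusCancelled); err != nil {
			return err
		}
	}
	return nil
}
```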
diff --git a/models/actions/task_step.go b/models/actions/task_step.go
new file mode 100644
index 0000000000..3af1fe3f5a
--- /dev/null
+++ b/models/actions/task_step.go
@@ -0,0 +1,41 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "context"
+ "time"
+
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/timeutil"
+)
+
+// ActionTaskStep represents a step of ActionTask
+type ActionTaskStep struct {
+ ID int64
+ Name string `xorm:"VARCHAR(255)"`
+ TaskID int64 `xorm:"index unique(task_index)"`
+ Index int64 `xorm:"index unique(task_index)"`
+ RepoID int64 `xorm:"index"`
+ Status Status `xorm:"index"`
+ LogIndex int64
+ LogLength int64
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+}
+
+func (step *ActionTaskStep) Duration() time.Duration {
+ return calculateDuration(step.Started, step.Stopped, step.Status)
+}
+
+func init() {
+ db.RegisterModel(new(ActionTaskStep))
+}
+
+func GetTaskStepsByTaskID(ctx context.Context, taskID int64) ([]*ActionTaskStep, error) {
+ var steps []*ActionTaskStep
+ return steps, db.GetEngine(ctx).Where("task_id=?", taskID).OrderBy("`index` ASC").Find(&steps)
+}
diff --git a/models/actions/utils.go b/models/actions/utils.go
new file mode 100644
index 0000000000..12657942fc
--- /dev/null
+++ b/models/actions/utils.go
@@ -0,0 +1,84 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "bytes"
+ "encoding/binary"
+ "encoding/hex"
+ "errors"
+ "fmt"
+ "io"
+ "time"
+
+ auth_model "code.gitea.io/gitea/models/auth"
+ "code.gitea.io/gitea/modules/timeutil"
+ "code.gitea.io/gitea/modules/util"
+)
+
+func generateSaltedToken() (string, string, string, string, error) {
+ salt, err := util.CryptoRandomString(10)
+ if err != nil {
+ return "", "", "", "", err
+ }
+ buf, err := util.CryptoRandomBytes(20)
+ if err != nil {
+ return "", "", "", "", err
+ }
+ token := hex.EncodeToString(buf)
+ hash := auth_model.HashToken(token, salt)
+ return token, salt, hash, token[len(token)-8:], nil
+}
+
+/*
+LogIndexes is the index for mapping log line number to buffer offset.
+Because it uses varint encoding, it is impossible to predict its size.
+But we can make a simple estimate assuming that each log line is about 200 bytes; then:
+| lines | file size | index size |
+|-----------|---------------------|--------------------|
+| 100 | 20 KiB(20000) | 258 B(258) |
+| 1000 | 195 KiB(200000) | 2.9 KiB(2958) |
+| 10000 | 1.9 MiB(2000000) | 34 KiB(34715) |
+| 100000 | 19 MiB(20000000) | 386 KiB(394715) |
+| 1000000 | 191 MiB(200000000) | 4.1 MiB(4323626) |
+| 10000000 | 1.9 GiB(2000000000) | 47 MiB(49323626) |
+| 100000000 | 19 GiB(20000000000) | 490 MiB(513424280) |
+*/
+type LogIndexes []int64
+
+func (indexes *LogIndexes) FromDB(b []byte) error {
+ reader := bytes.NewReader(b)
+ for {
+ v, err := binary.ReadVarint(reader)
+ if err != nil {
+ if errors.Is(err, io.EOF) {
+ return nil
+ }
+ return fmt.Errorf("binary ReadVarint: %w", err)
+ }
+ *indexes = append(*indexes, v)
+ }
+}
+
+func (indexes *LogIndexes) ToDB() ([]byte, error) {
+ buf, i := make([]byte, binary.MaxVarintLen64*len(*indexes)), 0
+ for _, v := range *indexes {
+ n := binary.PutVarint(buf[i:], v)
+ i += n
+ }
+ return buf[:i], nil
+}
+
+var timeSince = time.Since
+
+func calculateDuration(started, stopped timeutil.TimeStamp, status Status) time.Duration {
+ if started == 0 {
+ return 0
+ }
+ s := started.AsTime()
+ if status.IsDone() {
+ return stopped.AsTime().Sub(s)
+ }
+ return timeSince(s).Truncate(time.Second)
+}
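> `LogIndexes` stores, per log line, the byte offset of that line in the stored log blob, so serving a line range is one seek plus a bounded read. A hypothetical reader sketch built on that assumption (the `io.ReadSeeker` stands in for the real log storage, which is outside this file):

```go
// Hypothetical log reader sketch; the io.ReadSeeker stands in for real storage.
package actionsdemo

import (
	"bufio"
	"io"

	actions_model "code.gitea.io/gitea/models/actions"
)

// readLogLines returns up to count lines starting at the 0-based line number,
// assuming indexes[i] holds the byte offset of line i as described above.
func readLogLines(logFile io.ReadSeeker, indexes actions_model.LogIndexes, line, count int) ([]string, error) {
	if line < 0 || line >= len(indexes) {
		return nil, nil
	}
	if _, err := logFile.Seek(indexes[line], io.SeekStart); err != nil {
		return nil, err
	}
	var lines []string
	scanner := bufio.NewScanner(logFile)
	for len(lines) < count && scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	return lines, scanner.Err()
}
```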
diff --git a/models/actions/utils_test.go b/models/actions/utils_test.go
new file mode 100644
index 0000000000..98c048d4ef
--- /dev/null
+++ b/models/actions/utils_test.go
@@ -0,0 +1,90 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package actions
+
+import (
+ "math"
+ "testing"
+ "time"
+
+ "code.gitea.io/gitea/modules/timeutil"
+
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+)
+
+func TestLogIndexes_ToDB(t *testing.T) {
+ tests := []struct {
+ indexes LogIndexes
+ }{
+ {
+ indexes: []int64{1, 2, 0, -1, -2, math.MaxInt64, math.MinInt64},
+ },
+ }
+ for _, tt := range tests {
+ t.Run("", func(t *testing.T) {
+ got, err := tt.indexes.ToDB()
+ require.NoError(t, err)
+
+ indexes := LogIndexes{}
+ require.NoError(t, indexes.FromDB(got))
+
+ assert.Equal(t, tt.indexes, indexes)
+ })
+ }
+}
+
+func Test_calculateDuration(t *testing.T) {
+ oldTimeSince := timeSince
+ defer func() {
+ timeSince = oldTimeSince
+ }()
+
+ timeSince = func(t time.Time) time.Duration {
+ return timeutil.TimeStamp(1000).AsTime().Sub(t)
+ }
+ type args struct {
+ started timeutil.TimeStamp
+ stopped timeutil.TimeStamp
+ status Status
+ }
+ tests := []struct {
+ name string
+ args args
+ want time.Duration
+ }{
+ {
+ name: "unknown",
+ args: args{
+ started: 0,
+ stopped: 0,
+ status: StatusUnknown,
+ },
+ want: 0,
+ },
+ {
+ name: "running",
+ args: args{
+ started: 500,
+ stopped: 0,
+ status: StatusRunning,
+ },
+ want: 500 * time.Second,
+ },
+ {
+ name: "done",
+ args: args{
+ started: 500,
+ stopped: 600,
+ status: StatusSuccess,
+ },
+ want: 100 * time.Second,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ assert.Equalf(t, tt.want, calculateDuration(tt.args.started, tt.args.stopped, tt.args.status), "calculateDuration(%v, %v, %v)", tt.args.started, tt.args.stopped, tt.args.status)
+ })
+ }
+}
diff --git a/models/dbfs/dbfile.go b/models/dbfs/dbfile.go
new file mode 100644
index 0000000000..bac1cb9eb6
--- /dev/null
+++ b/models/dbfs/dbfile.go
@@ -0,0 +1,357 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package dbfs
+
+import (
+ "context"
+ "errors"
+ "io"
+ "os"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "time"
+
+ "code.gitea.io/gitea/models/db"
+)
+
+var defaultFileBlockSize int64 = 32 * 1024
+
+// File represents a file backed by the database, with an os.File-like interface.
+type File interface {
+ io.ReadWriteCloser
+ io.Seeker
+}
+
+type file struct {
+ ctx context.Context
+ metaID int64
+ fullPath string
+ blockSize int64
+
+ allowRead bool
+ allowWrite bool
+ offset int64
+}
+
+var _ File = (*file)(nil)
+
+func (f *file) readAt(fileMeta *dbfsMeta, offset int64, p []byte) (n int, err error) {
+ if offset >= fileMeta.FileSize {
+ return 0, io.EOF
+ }
+
+ blobPos := int(offset % f.blockSize)
+ blobOffset := offset - int64(blobPos)
+ blobRemaining := int(f.blockSize) - blobPos
+ needRead := len(p)
+ if needRead > blobRemaining {
+ needRead = blobRemaining
+ }
+ if blobOffset+int64(blobPos)+int64(needRead) > fileMeta.FileSize {
+ needRead = int(fileMeta.FileSize - blobOffset - int64(blobPos))
+ }
+ if needRead <= 0 {
+ return 0, io.EOF
+ }
+ var fileData dbfsData
+ ok, err := db.GetEngine(f.ctx).Where("meta_id = ? AND blob_offset = ?", f.metaID, blobOffset).Get(&fileData)
+ if err != nil {
+ return 0, err
+ }
+ blobData := fileData.BlobData
+ if !ok {
+ blobData = nil
+ }
+
+ canCopy := len(blobData) - blobPos
+ if canCopy <= 0 {
+ canCopy = 0
+ }
+ realRead := needRead
+ if realRead > canCopy {
+ realRead = canCopy
+ }
+ if realRead > 0 {
+ copy(p[:realRead], fileData.BlobData[blobPos:blobPos+realRead])
+ }
+ for i := realRead; i < needRead; i++ {
+ p[i] = 0
+ }
+ return needRead, nil
+}
+
+func (f *file) Read(p []byte) (n int, err error) {
+ if f.metaID == 0 || !f.allowRead {
+ return 0, os.ErrInvalid
+ }
+
+ fileMeta, err := findFileMetaByID(f.ctx, f.metaID)
+ if err != nil {
+ return 0, err
+ }
+ n, err = f.readAt(fileMeta, f.offset, p)
+ f.offset += int64(n)
+ return n, err
+}
+
+func (f *file) Write(p []byte) (n int, err error) {
+ if f.metaID == 0 || !f.allowWrite {
+ return 0, os.ErrInvalid
+ }
+
+ fileMeta, err := findFileMetaByID(f.ctx, f.metaID)
+ if err != nil {
+ return 0, err
+ }
+
+ needUpdateSize := false
+ written := 0
+ for len(p) > 0 {
+ blobPos := int(f.offset % f.blockSize)
+ blobOffset := f.offset - int64(blobPos)
+ blobRemaining := int(f.blockSize) - blobPos
+ needWrite := len(p)
+ if needWrite > blobRemaining {
+ needWrite = blobRemaining
+ }
+ buf := make([]byte, f.blockSize)
+ readBytes, err := f.readAt(fileMeta, blobOffset, buf)
+ if err != nil && !errors.Is(err, io.EOF) {
+ return written, err
+ }
+ copy(buf[blobPos:blobPos+needWrite], p[:needWrite])
+ if blobPos+needWrite > readBytes {
+ buf = buf[:blobPos+needWrite]
+ } else {
+ buf = buf[:readBytes]
+ }
+
+ fileData := dbfsData{
+ MetaID: fileMeta.ID,
+ BlobOffset: blobOffset,
+ BlobData: buf,
+ }
+ if res, err := db.GetEngine(f.ctx).Exec("UPDATE dbfs_data SET revision=revision+1, blob_data=? WHERE meta_id=? AND blob_offset=?", buf, fileMeta.ID, blobOffset); err != nil {
+ return written, err
+ } else if updated, err := res.RowsAffected(); err != nil {
+ return written, err
+ } else if updated == 0 {
+ if _, err = db.GetEngine(f.ctx).Insert(&fileData); err != nil {
+ return written, err
+ }
+ }
+ written += needWrite
+ f.offset += int64(needWrite)
+ if f.offset > fileMeta.FileSize {
+ fileMeta.FileSize = f.offset
+ needUpdateSize = true
+ }
+ p = p[needWrite:]
+ }
+
+ fileMetaUpdate := dbfsMeta{
+ ModifyTimestamp: timeToFileTimestamp(time.Now()),
+ }
+ if needUpdateSize {
+ fileMetaUpdate.FileSize = f.offset
+ }
+ if _, err := db.GetEngine(f.ctx).ID(fileMeta.ID).Update(fileMetaUpdate); err != nil {
+ return written, err
+ }
+ return written, nil
+}
+
+func (f *file) Seek(n int64, whence int) (int64, error) {
+ if f.metaID == 0 {
+ return 0, os.ErrInvalid
+ }
+
+ newOffset := f.offset
+ switch whence {
+ case io.SeekStart:
+ newOffset = n
+ case io.SeekCurrent:
+ newOffset += n
+ case io.SeekEnd:
+ size, err := f.size()
+ if err != nil {
+ return f.offset, err
+ }
+ newOffset = size + n
+ default:
+ return f.offset, os.ErrInvalid
+ }
+ if newOffset < 0 {
+ return f.offset, os.ErrInvalid
+ }
+ f.offset = newOffset
+ return newOffset, nil
+}
+
+func (f *file) Close() error {
+ return nil
+}
+
+func timeToFileTimestamp(t time.Time) int64 {
+ return t.UnixMicro()
+}
+
+func (f *file) loadMetaByPath() (*dbfsMeta, error) {
+ var fileMeta dbfsMeta
+ if ok, err := db.GetEngine(f.ctx).Where("full_path = ?", f.fullPath).Get(&fileMeta); err != nil {
+ return nil, err
+ } else if ok {
+ f.metaID = fileMeta.ID
+ f.blockSize = fileMeta.BlockSize
+ return &fileMeta, nil
+ }
+ return nil, nil
+}
+
+func (f *file) open(flag int) (err error) {
+ // see os.OpenFile for flag values
+ if flag&os.O_WRONLY != 0 {
+ f.allowWrite = true
+ } else if flag&os.O_RDWR != 0 {
+ f.allowRead = true
+ f.allowWrite = true
+ } else /* O_RDONLY */ {
+ f.allowRead = true
+ }
+
+ if f.allowWrite {
+ if flag&os.O_CREATE != 0 {
+ if flag&os.O_EXCL != 0 {
+ // file must not exist.
+ if f.metaID != 0 {
+ return os.ErrExist
+ }
+ } else {
+ // create a new file if none exists.
+ if f.metaID == 0 {
+ if err = f.createEmpty(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ if flag&os.O_TRUNC != 0 {
+ if err = f.truncate(); err != nil {
+ return err
+ }
+ }
+ if flag&os.O_APPEND != 0 {
+ if _, err = f.Seek(0, io.SeekEnd); err != nil {
+ return err
+ }
+ }
+ return nil
+ }
+
+ // read only mode
+ if f.metaID == 0 {
+ return os.ErrNotExist
+ }
+ return nil
+}
+
+func (f *file) createEmpty() error {
+ if f.metaID != 0 {
+ return os.ErrExist
+ }
+ now := time.Now()
+ _, err := db.GetEngine(f.ctx).Insert(&dbfsMeta{
+ FullPath: f.fullPath,
+ BlockSize: f.blockSize,
+ CreateTimestamp: timeToFileTimestamp(now),
+ ModifyTimestamp: timeToFileTimestamp(now),
+ })
+ if err != nil {
+ return err
+ }
+ if _, err = f.loadMetaByPath(); err != nil {
+ return err
+ }
+ return nil
+}
+
+func (f *file) truncate() error {
+ if f.metaID == 0 {
+ return os.ErrNotExist
+ }
+ return db.WithTx(f.ctx, func(ctx context.Context) error {
+ if _, err := db.GetEngine(ctx).Exec("UPDATE dbfs_meta SET file_size = 0 WHERE id = ?", f.metaID); err != nil {
+ return err
+ }
+ if _, err := db.GetEngine(ctx).Delete(&dbfsData{MetaID: f.metaID}); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (f *file) renameTo(newPath string) error {
+ if f.metaID == 0 {
+ return os.ErrNotExist
+ }
+ newPath = buildPath(newPath)
+ return db.WithTx(f.ctx, func(ctx context.Context) error {
+ if _, err := db.GetEngine(ctx).Exec("UPDATE dbfs_meta SET full_path = ? WHERE id = ?", newPath, f.metaID); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (f *file) delete() error {
+ if f.metaID == 0 {
+ return os.ErrNotExist
+ }
+ return db.WithTx(f.ctx, func(ctx context.Context) error {
+ if _, err := db.GetEngine(ctx).Delete(&dbfsMeta{ID: f.metaID}); err != nil {
+ return err
+ }
+ if _, err := db.GetEngine(ctx).Delete(&dbfsData{MetaID: f.metaID}); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (f *file) size() (int64, error) {
+ if f.metaID == 0 {
+ return 0, os.ErrNotExist
+ }
+ fileMeta, err := findFileMetaByID(f.ctx, f.metaID)
+ if err != nil {
+ return 0, err
+ }
+ return fileMeta.FileSize, nil
+}
+
+func findFileMetaByID(ctx context.Context, metaID int64) (*dbfsMeta, error) {
+ var fileMeta dbfsMeta
+ if ok, err := db.GetEngine(ctx).Where("id = ?", metaID).Get(&fileMeta); err != nil {
+ return nil, err
+ } else if ok {
+ return &fileMeta, nil
+ }
+ return nil, nil
+}
+
+func buildPath(path string) string {
+ path = filepath.Clean(path)
+ path = strings.ReplaceAll(path, "\\", "/")
+ path = strings.TrimPrefix(path, "/")
+ return strconv.Itoa(strings.Count(path, "/")) + ":" + path
+}
+
+func newDbFile(ctx context.Context, path string) (*file, error) {
+ path = buildPath(path)
+ f := &file{ctx: ctx, fullPath: path, blockSize: defaultFileBlockSize}
+ if _, err := f.loadMetaByPath(); err != nil {
+ return nil, err
+ }
+ return f, nil
+}
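Both `readAt` and `Write` above derive two values from the current offset: the start of the block that contains it (stored as `blob_offset` in `dbfs_data`) and the position inside that block. A minimal sketch of that arithmetic, using a hypothetical helper name:

```go
package main

// splitOffset mirrors the offset arithmetic in readAt/Write: blobOffset is the
// start of the block containing offset (the blob_offset key), blobPos is the
// position of offset inside that block.
func splitOffset(offset, blockSize int64) (blobOffset int64, blobPos int) {
	blobPos = int(offset % blockSize)
	blobOffset = offset - int64(blobPos)
	return blobOffset, blobPos
}
```

For example, with the default 32 KiB block size, offset 70000 maps to the block starting at 65536, at position 4464 within that block.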
diff --git a/models/dbfs/dbfs.go b/models/dbfs/dbfs.go
new file mode 100644
index 0000000000..89d3dc7cbc
--- /dev/null
+++ b/models/dbfs/dbfs.go
@@ -0,0 +1,73 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package dbfs
+
+import (
+ "context"
+ "os"
+
+ "code.gitea.io/gitea/models/db"
+)
+
+type dbfsMeta struct {
+ ID int64 `xorm:"pk autoincr"`
+ FullPath string `xorm:"VARCHAR(500) UNIQUE NOT NULL"`
+ BlockSize int64 `xorm:"BIGINT NOT NULL"`
+ FileSize int64 `xorm:"BIGINT NOT NULL"`
+ CreateTimestamp int64 `xorm:"BIGINT NOT NULL"`
+ ModifyTimestamp int64 `xorm:"BIGINT NOT NULL"`
+}
+
+type dbfsData struct {
+ ID int64 `xorm:"pk autoincr"`
+ Revision int64 `xorm:"BIGINT NOT NULL"`
+ MetaID int64 `xorm:"BIGINT index(meta_offset) NOT NULL"`
+ BlobOffset int64 `xorm:"BIGINT index(meta_offset) NOT NULL"`
+ BlobSize int64 `xorm:"BIGINT NOT NULL"`
+ BlobData []byte `xorm:"BLOB NOT NULL"`
+}
+
+func init() {
+ db.RegisterModel(new(dbfsMeta))
+ db.RegisterModel(new(dbfsData))
+}
+
+// OpenFile opens the named file in the database with the given flag (see os.OpenFile).
+func OpenFile(ctx context.Context, name string, flag int) (File, error) {
+ f, err := newDbFile(ctx, name)
+ if err != nil {
+ return nil, err
+ }
+ err = f.open(flag)
+ if err != nil {
+ _ = f.Close()
+ return nil, err
+ }
+ return f, nil
+}
+
+// Open opens the named file for reading.
+func Open(ctx context.Context, name string) (File, error) {
+ return OpenFile(ctx, name, os.O_RDONLY)
+}
+
+// Create creates or truncates the named file and opens it for reading and writing.
+func Create(ctx context.Context, name string) (File, error) {
+ return OpenFile(ctx, name, os.O_RDWR|os.O_CREATE|os.O_TRUNC)
+}
+
+// Rename renames (moves) oldPath to newPath.
+func Rename(ctx context.Context, oldPath, newPath string) error {
+ f, err := newDbFile(ctx, oldPath)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ return f.renameTo(newPath)
+}
+
+// Remove removes the named file from the database.
+func Remove(ctx context.Context, name string) error {
+ f, err := newDbFile(ctx, name)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ return f.delete()
+}
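The package exposes an os-like API on top of the two tables defined above. A minimal usage sketch, assuming the database and `db.DefaultContext` are initialized (as in the tests that follow); the file name is an arbitrary example:

```go
package main

import (
	"fmt"
	"io"

	"code.gitea.io/gitea/models/db"
	"code.gitea.io/gitea/models/dbfs"
)

// writeAndReadBack creates a file in the database, writes to it,
// then seeks back to the start and reads the whole content.
func writeAndReadBack() error {
	f, err := dbfs.Create(db.DefaultContext, "example/task_1.log")
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err = f.Write([]byte("hello dbfs\n")); err != nil {
		return err
	}
	if _, err = f.Seek(0, io.SeekStart); err != nil {
		return err
	}
	buf, err := io.ReadAll(f)
	if err != nil {
		return err
	}
	fmt.Print(string(buf))
	return nil
}
```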
diff --git a/models/dbfs/dbfs_test.go b/models/dbfs/dbfs_test.go
new file mode 100644
index 0000000000..30aa6463c5
--- /dev/null
+++ b/models/dbfs/dbfs_test.go
@@ -0,0 +1,179 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package dbfs
+
+import (
+ "bufio"
+ "io"
+ "os"
+ "testing"
+
+ "code.gitea.io/gitea/models/db"
+
+ "github.com/stretchr/testify/assert"
+
+ _ "github.com/mattn/go-sqlite3"
+)
+
+func changeDefaultFileBlockSize(n int64) (restore func()) {
+ old := defaultFileBlockSize
+ defaultFileBlockSize = n
+ return func() {
+ defaultFileBlockSize = old
+ }
+}
+
+func TestDbfsBasic(t *testing.T) {
+ defer changeDefaultFileBlockSize(4)()
+
+ // test basic write/read
+ f, err := OpenFile(db.DefaultContext, "test.txt", os.O_RDWR|os.O_CREATE)
+ assert.NoError(t, err)
+
+ n, err := f.Write([]byte("0123456789")) // blocks: 0123 4567 89
+ assert.NoError(t, err)
+ assert.EqualValues(t, 10, n)
+
+ _, err = f.Seek(0, io.SeekStart)
+ assert.NoError(t, err)
+
+ buf, err := io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, 10, len(buf))
+ assert.EqualValues(t, "0123456789", string(buf))
+
+ // write some new data
+ _, err = f.Seek(1, io.SeekStart)
+ assert.NoError(t, err)
+ _, err = f.Write([]byte("bcdefghi")) // blocks: 0bcd efgh i9
+ assert.NoError(t, err)
+
+ // read from offset
+ buf, err = io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "9", string(buf))
+
+ // read all
+ _, err = f.Seek(0, io.SeekStart)
+ assert.NoError(t, err)
+ buf, err = io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "0bcdefghi9", string(buf))
+
+ // write to new size
+ _, err = f.Seek(-1, io.SeekEnd)
+ assert.NoError(t, err)
+ _, err = f.Write([]byte("JKLMNOP")) // blocks: 0bcd efgh iJKL MNOP
+ assert.NoError(t, err)
+ _, err = f.Seek(0, io.SeekStart)
+ assert.NoError(t, err)
+ buf, err = io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "0bcdefghiJKLMNOP", string(buf))
+
+ // write beyond EOF and fill with zero
+ _, err = f.Seek(5, io.SeekCurrent)
+ assert.NoError(t, err)
+ _, err = f.Write([]byte("xyzu")) // blocks: 0bcd efgh iJKL MNOP 0000 0xyz u
+ assert.NoError(t, err)
+ _, err = f.Seek(0, io.SeekStart)
+ assert.NoError(t, err)
+ buf, err = io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "0bcdefghiJKLMNOP\x00\x00\x00\x00\x00xyzu", string(buf))
+
+ // write to the block with zeros
+ _, err = f.Seek(-6, io.SeekCurrent)
+ assert.NoError(t, err)
+ _, err = f.Write([]byte("ABCD")) // blocks: 0bcd efgh iJKL MNOP 000A BCDz u
+ assert.NoError(t, err)
+ _, err = f.Seek(0, io.SeekStart)
+ assert.NoError(t, err)
+ buf, err = io.ReadAll(f)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "0bcdefghiJKLMNOP\x00\x00\x00ABCDzu", string(buf))
+
+ assert.NoError(t, f.Close())
+
+ // test rename
+ err = Rename(db.DefaultContext, "test.txt", "test2.txt")
+ assert.NoError(t, err)
+
+ _, err = OpenFile(db.DefaultContext, "test.txt", os.O_RDONLY)
+ assert.Error(t, err)
+
+ f, err = OpenFile(db.DefaultContext, "test2.txt", os.O_RDONLY)
+ assert.NoError(t, err)
+ assert.NoError(t, f.Close())
+
+ // test remove
+ err = Remove(db.DefaultContext, "test2.txt")
+ assert.NoError(t, err)
+
+ _, err = OpenFile(db.DefaultContext, "test2.txt", os.O_RDONLY)
+ assert.Error(t, err)
+}
+
+func TestDbfsReadWrite(t *testing.T) {
+ defer changeDefaultFileBlockSize(4)()
+
+ f1, err := OpenFile(db.DefaultContext, "test.log", os.O_RDWR|os.O_CREATE)
+ assert.NoError(t, err)
+ defer f1.Close()
+
+ f2, err := OpenFile(db.DefaultContext, "test.log", os.O_RDONLY)
+ assert.NoError(t, err)
+ defer f2.Close()
+
+ _, err = f1.Write([]byte("line 1\n"))
+ assert.NoError(t, err)
+
+ f2r := bufio.NewReader(f2)
+
+ line, err := f2r.ReadString('\n')
+ assert.NoError(t, err)
+ assert.EqualValues(t, "line 1\n", line)
+ _, err = f2r.ReadString('\n')
+ assert.ErrorIs(t, err, io.EOF)
+
+ _, err = f1.Write([]byte("line 2\n"))
+ assert.NoError(t, err)
+
+ line, err = f2r.ReadString('\n')
+ assert.NoError(t, err)
+ assert.EqualValues(t, "line 2\n", line)
+ _, err = f2r.ReadString('\n')
+ assert.ErrorIs(t, err, io.EOF)
+}
+
+func TestDbfsSeekWrite(t *testing.T) {
+ defer changeDefaultFileBlockSize(4)()
+
+ f, err := OpenFile(db.DefaultContext, "test2.log", os.O_RDWR|os.O_CREATE)
+ assert.NoError(t, err)
+ defer f.Close()
+
+ n, err := f.Write([]byte("111"))
+ assert.NoError(t, err)
+
+ _, err = f.Seek(int64(n), io.SeekStart)
+ assert.NoError(t, err)
+
+ _, err = f.Write([]byte("222"))
+ assert.NoError(t, err)
+
+ _, err = f.Seek(int64(n), io.SeekStart)
+ assert.NoError(t, err)
+
+ _, err = f.Write([]byte("333"))
+ assert.NoError(t, err)
+
+ fr, err := OpenFile(db.DefaultContext, "test2.log", os.O_RDONLY)
+ assert.NoError(t, err)
+ defer fr.Close()
+
+ buf, err := io.ReadAll(fr)
+ assert.NoError(t, err)
+ assert.EqualValues(t, "111333", string(buf))
+}
diff --git a/models/dbfs/main_test.go b/models/dbfs/main_test.go
new file mode 100644
index 0000000000..7a820b2d83
--- /dev/null
+++ b/models/dbfs/main_test.go
@@ -0,0 +1,23 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package dbfs
+
+import (
+ "path/filepath"
+ "testing"
+
+ "code.gitea.io/gitea/models/unittest"
+ "code.gitea.io/gitea/modules/setting"
+)
+
+func init() {
+ setting.SetCustomPathAndConf("", "", "")
+ setting.LoadForTest()
+}
+
+func TestMain(m *testing.M) {
+ unittest.MainTest(m, &unittest.TestOptions{
+ GiteaRootPath: filepath.Join("..", ".."),
+ })
+}
diff --git a/models/issues/comment.go b/models/issues/comment.go
index 91dc128277..9ad538fcc6 100644
--- a/models/issues/comment.go
+++ b/models/issues/comment.go
@@ -355,7 +355,7 @@ func (c *Comment) LoadPoster(ctx context.Context) (err error) {
return nil
}
- c.Poster, err = user_model.GetUserByID(ctx, c.PosterID)
+ c.Poster, err = user_model.GetPossibleUserByID(ctx, c.PosterID)
if err != nil {
if user_model.IsErrUserNotExist(err) {
c.PosterID = -1
diff --git a/models/issues/comment_list.go b/models/issues/comment_list.go
index 2b55bc212f..0411d44531 100644
--- a/models/issues/comment_list.go
+++ b/models/issues/comment_list.go
@@ -29,32 +29,13 @@ func (comments CommentList) LoadPosters(ctx context.Context) error {
return nil
}
- posterIDs := comments.getPosterIDs()
- posterMaps := make(map[int64]*user_model.User, len(posterIDs))
- left := len(posterIDs)
- for left > 0 {
- limit := db.DefaultMaxInSize
- if left < limit {
- limit = left
- }
- err := db.GetEngine(ctx).
- In("id", posterIDs[:limit]).
- Find(&posterMaps)
- if err != nil {
- return err
- }
- left -= limit
- posterIDs = posterIDs[limit:]
+ posterMaps, err := getPosters(ctx, comments.getPosterIDs())
+ if err != nil {
+ return err
}
for _, comment := range comments {
- if comment.PosterID <= 0 {
- continue
- }
- var ok bool
- if comment.Poster, ok = posterMaps[comment.PosterID]; !ok {
- comment.Poster = user_model.NewGhostUser()
- }
+ comment.Poster = getPoster(comment.PosterID, posterMaps)
}
return nil
}
diff --git a/models/issues/issue.go b/models/issues/issue.go
index 50c9b77119..78cac90052 100644
--- a/models/issues/issue.go
+++ b/models/issues/issue.go
@@ -235,7 +235,7 @@ func (issue *Issue) LoadLabels(ctx context.Context) (err error) {
// LoadPoster loads poster
func (issue *Issue) LoadPoster(ctx context.Context) (err error) {
if issue.Poster == nil {
- issue.Poster, err = user_model.GetUserByID(ctx, issue.PosterID)
+ issue.Poster, err = user_model.GetPossibleUserByID(ctx, issue.PosterID)
if err != nil {
issue.PosterID = -1
issue.Poster = user_model.NewGhostUser()
diff --git a/models/issues/issue_list.go b/models/issues/issue_list.go
index e22e48c0bb..6ddadd27ed 100644
--- a/models/issues/issue_list.go
+++ b/models/issues/issue_list.go
@@ -86,7 +86,18 @@ func (issues IssueList) loadPosters(ctx context.Context) error {
return nil
}
- posterIDs := issues.getPosterIDs()
+ posterMaps, err := getPosters(ctx, issues.getPosterIDs())
+ if err != nil {
+ return err
+ }
+
+ for _, issue := range issues {
+ issue.Poster = getPoster(issue.PosterID, posterMaps)
+ }
+ return nil
+}
+
+func getPosters(ctx context.Context, posterIDs []int64) (map[int64]*user_model.User, error) {
posterMaps := make(map[int64]*user_model.User, len(posterIDs))
left := len(posterIDs)
for left > 0 {
@@ -98,22 +109,26 @@ func (issues IssueList) loadPosters(ctx context.Context) error {
In("id", posterIDs[:limit]).
Find(&posterMaps)
if err != nil {
- return err
+ return nil, err
}
left -= limit
posterIDs = posterIDs[limit:]
}
+ return posterMaps, nil
+}
- for _, issue := range issues {
- if issue.PosterID <= 0 {
- continue
- }
- var ok bool
- if issue.Poster, ok = posterMaps[issue.PosterID]; !ok {
- issue.Poster = user_model.NewGhostUser()
- }
+func getPoster(posterID int64, posterMaps map[int64]*user_model.User) *user_model.User {
+ if posterID == user_model.ActionsUserID {
+ return user_model.NewActionsUser()
}
- return nil
+ if posterID <= 0 {
+ return nil
+ }
+ poster, ok := posterMaps[posterID]
+ if !ok {
+ return user_model.NewGhostUser()
+ }
+ return poster
}
func (issues IssueList) getIssueIDs() []int64 {
diff --git a/models/issues/pull.go b/models/issues/pull.go
index 93b227f3fd..6ff6502e4e 100644
--- a/models/issues/pull.go
+++ b/models/issues/pull.go
@@ -394,6 +394,11 @@ func (pr *PullRequest) IsAncestor() bool {
return pr.Status == PullRequestStatusAncestor
}
+// IsFromFork returns true if this PR is from a fork.
+func (pr *PullRequest) IsFromFork() bool {
+ return pr.HeadRepoID != pr.BaseRepoID
+}
+
// SetMerged sets a pull request to merged and closes the corresponding issue
func (pr *PullRequest) SetMerged(ctx context.Context) (bool, error) {
if pr.HasMerged {
diff --git a/models/issues/review.go b/models/issues/review.go
index 7e1a39bb5b..fe123d7398 100644
--- a/models/issues/review.go
+++ b/models/issues/review.go
@@ -158,7 +158,7 @@ func (r *Review) LoadReviewer(ctx context.Context) (err error) {
if r.ReviewerID == 0 || r.Reviewer != nil {
return
}
- r.Reviewer, err = user_model.GetUserByID(ctx, r.ReviewerID)
+ r.Reviewer, err = user_model.GetPossibleUserByID(ctx, r.ReviewerID)
return err
}
diff --git a/models/migrations/migrations.go b/models/migrations/migrations.go
index 2058fcec0f..15600f057c 100644
--- a/models/migrations/migrations.go
+++ b/models/migrations/migrations.go
@@ -453,6 +453,8 @@ var migrations = []Migration{
NewMigration("Add updated unix to LFSMetaObject", v1_19.AddUpdatedUnixToLFSMetaObject),
// v239 -> v240
NewMigration("Add scope for access_token", v1_19.AddScopeForAccessTokens),
+ // v240 -> v241
+ NewMigration("Add actions tables", v1_19.AddActionsTables),
}
// GetCurrentDBVersion returns the current db version
diff --git a/models/migrations/v1_19/v240.go b/models/migrations/v1_19/v240.go
new file mode 100644
index 0000000000..4505f86299
--- /dev/null
+++ b/models/migrations/v1_19/v240.go
@@ -0,0 +1,176 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package v1_19 //nolint
+
+import (
+ "code.gitea.io/gitea/models/db"
+ "code.gitea.io/gitea/modules/timeutil"
+
+ "xorm.io/xorm"
+)
+
+// AddActionsTables adds the database tables required by Gitea Actions.
+func AddActionsTables(x *xorm.Engine) error {
+ type ActionRunner struct {
+ ID int64
+ UUID string `xorm:"CHAR(36) UNIQUE"`
+ Name string `xorm:"VARCHAR(255)"`
+ OwnerID int64 `xorm:"index"` // org-level runner; 0 means system-wide
+ RepoID int64 `xorm:"index"` // repo-level runner; if OwnerID is also zero, it is a global runner
+ Description string `xorm:"TEXT"`
+ Base int // 0: native, 1: docker, 2: virtual machine
+ RepoRange string // glob pattern matching which repositories can use this runner
+
+ Token string `xorm:"-"`
+ TokenHash string `xorm:"UNIQUE"` // sha256 of token
+ TokenSalt string
+ // TokenLastEight string `xorm:"token_last_eight"` // it's unnecessary because we don't find runners by token
+
+ LastOnline timeutil.TimeStamp `xorm:"index"`
+ LastActive timeutil.TimeStamp `xorm:"index"`
+
+ // Store OS and Arch.
+ AgentLabels []string
+ // Store custom labels defined by the user.
+ CustomLabels []string
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ Deleted timeutil.TimeStamp `xorm:"deleted"`
+ }
+
+ type ActionRunnerToken struct {
+ ID int64
+ Token string `xorm:"UNIQUE"`
+ OwnerID int64 `xorm:"index"` // org-level runner; 0 means system-wide
+ RepoID int64 `xorm:"index"` // repo-level runner; if OwnerID is also zero, it is a global runner
+ IsActive bool
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ Deleted timeutil.TimeStamp `xorm:"deleted"`
+ }
+
+ type ActionRun struct {
+ ID int64
+ Title string
+ RepoID int64 `xorm:"index unique(repo_index)"`
+ OwnerID int64 `xorm:"index"`
+ WorkflowID string `xorm:"index"` // the name of the workflow file
+ Index int64 `xorm:"index unique(repo_index)"` // a unique number for each run of a repository
+ TriggerUserID int64
+ Ref string
+ CommitSHA string
+ Event string
+ IsForkPullRequest bool
+ EventPayload string `xorm:"LONGTEXT"`
+ Status int `xorm:"index"`
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ }
+
+ type ActionRunJob struct {
+ ID int64
+ RunID int64 `xorm:"index"`
+ RepoID int64 `xorm:"index"`
+ OwnerID int64 `xorm:"index"`
+ CommitSHA string `xorm:"index"`
+ IsForkPullRequest bool
+ Name string `xorm:"VARCHAR(255)"`
+ Attempt int64
+ WorkflowPayload []byte
+ JobID string `xorm:"VARCHAR(255)"` // job id in the workflow, not the ID of this record
+ Needs []string `xorm:"JSON TEXT"`
+ RunsOn []string `xorm:"JSON TEXT"`
+ TaskID int64 // the latest task of the job
+ Status int `xorm:"index"`
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated index"`
+ }
+
+ type Repository struct {
+ NumActionRuns int `xorm:"NOT NULL DEFAULT 0"`
+ NumClosedActionRuns int `xorm:"NOT NULL DEFAULT 0"`
+ }
+
+ type ActionRunIndex db.ResourceIndex
+
+ type ActionTask struct {
+ ID int64
+ JobID int64
+ Attempt int64
+ RunnerID int64 `xorm:"index"`
+ Status int `xorm:"index"`
+ Started timeutil.TimeStamp `xorm:"index"`
+ Stopped timeutil.TimeStamp
+
+ RepoID int64 `xorm:"index"`
+ OwnerID int64 `xorm:"index"`
+ CommitSHA string `xorm:"index"`
+ IsForkPullRequest bool
+
+ TokenHash string `xorm:"UNIQUE"` // sha256 of token
+ TokenSalt string
+ TokenLastEight string `xorm:"index token_last_eight"`
+
+ LogFilename string // file name of log
+ LogInStorage bool // read log from database or from storage
+ LogLength int64 // line count
+ LogSize int64 // blob size
+ LogIndexes []int64 `xorm:"LONGBLOB"` // line number to offset
+ LogExpired bool // files that are too old will be deleted
+
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated index"`
+ }
+
+ type ActionTaskStep struct {
+ ID int64
+ Name string `xorm:"VARCHAR(255)"`
+ TaskID int64 `xorm:"index unique(task_index)"`
+ Index int64 `xorm:"index unique(task_index)"`
+ RepoID int64 `xorm:"index"`
+ Status int `xorm:"index"`
+ LogIndex int64
+ LogLength int64
+ Started timeutil.TimeStamp
+ Stopped timeutil.TimeStamp
+ Created timeutil.TimeStamp `xorm:"created"`
+ Updated timeutil.TimeStamp `xorm:"updated"`
+ }
+
+ type dbfsMeta struct {
+ ID int64 `xorm:"pk autoincr"`
+ FullPath string `xorm:"VARCHAR(500) UNIQUE NOT NULL"`
+ BlockSize int64 `xorm:"BIGINT NOT NULL"`
+ FileSize int64 `xorm:"BIGINT NOT NULL"`
+ CreateTimestamp int64 `xorm:"BIGINT NOT NULL"`
+ ModifyTimestamp int64 `xorm:"BIGINT NOT NULL"`
+ }
+
+ type dbfsData struct {
+ ID int64 `xorm:"pk autoincr"`
+ Revision int64 `xorm:"BIGINT NOT NULL"`
+ MetaID int64 `xorm:"BIGINT index(meta_offset) NOT NULL"`
+ BlobOffset int64 `xorm:"BIGINT index(meta_offset) NOT NULL"`
+ BlobSize int64 `xorm:"BIGINT NOT NULL"`
+ BlobData []byte `xorm:"BLOB NOT NULL"`
+ }
+
+ return x.Sync(
+ new(ActionRunner),
+ new(ActionRunnerToken),
+ new(ActionRun),
+ new(ActionRunJob),
+ new(Repository),
+ new(ActionRunIndex),
+ new(ActionTask),
+ new(ActionTaskStep),
+ new(dbfsMeta),
+ new(dbfsData),
+ )
+}
diff --git a/models/repo.go b/models/repo.go
index e95887077c..38dc3f1ab1 100644
--- a/models/repo.go
+++ b/models/repo.go
@@ -11,6 +11,7 @@ import (
_ "image/jpeg" // Needed for jpeg support
+ actions_model "code.gitea.io/gitea/models/actions"
activities_model "code.gitea.io/gitea/models/activities"
admin_model "code.gitea.io/gitea/models/admin"
asymkey_model "code.gitea.io/gitea/models/asymkey"
@@ -26,6 +27,7 @@ import (
"code.gitea.io/gitea/models/unit"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/models/webhook"
+ actions_module "code.gitea.io/gitea/modules/actions"
"code.gitea.io/gitea/modules/lfs"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/storage"
@@ -52,6 +54,12 @@ func DeleteRepository(doer *user_model.User, uid, repoID int64) error {
defer committer.Close()
sess := db.GetEngine(ctx)
+ // Query the action tasks of this repo first; they are needed later to remove the log files after the tasks themselves have been deleted
+ tasks, err := actions_model.FindTasks(ctx, actions_model.FindTaskOptions{RepoID: repoID})
+ if err != nil {
+ return fmt.Errorf("find actions tasks of repo %v: %w", repoID, err)
+ }
+
// In case is a organization.
org, err := user_model.GetUserByID(ctx, uid)
if err != nil {
@@ -152,6 +160,11 @@ func DeleteRepository(doer *user_model.User, uid, repoID int64) error {
&repo_model.Watch{RepoID: repoID},
&webhook.Webhook{RepoID: repoID},
&secret_model.Secret{RepoID: repoID},
+ &actions_model.ActionTaskStep{RepoID: repoID},
+ &actions_model.ActionTask{RepoID: repoID},
+ &actions_model.ActionRunJob{RepoID: repoID},
+ &actions_model.ActionRun{RepoID: repoID},
+ &actions_model.ActionRunner{RepoID: repoID},
); err != nil {
return fmt.Errorf("deleteBeans: %w", err)
}
@@ -315,6 +328,15 @@ func DeleteRepository(doer *user_model.User, uid, repoID int64) error {
}
}
+ // Finally, delete the action logs; the action rows were deleted above, so no new log files can appear
+ for _, task := range tasks {
+ err := actions_module.RemoveLogs(ctx, task.LogInStorage, task.LogFilename)
+ if err != nil {
+ log.Error("remove log file %q: %v", task.LogFilename, err)
+ // go on with the remaining tasks
+ }
+ }
+
return nil
}
diff --git a/models/repo/repo.go b/models/repo/repo.go
index e5e1ac43b4..831eb22dc5 100644
--- a/models/repo/repo.go
+++ b/models/repo/repo.go
@@ -141,6 +141,9 @@ type Repository struct {
NumProjects int `xorm:"NOT NULL DEFAULT 0"`
NumClosedProjects int `xorm:"NOT NULL DEFAULT 0"`
NumOpenProjects int `xorm:"-"`
+ NumActionRuns int `xorm:"NOT NULL DEFAULT 0"`
+ NumClosedActionRuns int `xorm:"NOT NULL DEFAULT 0"`
+ NumOpenActionRuns int `xorm:"-"`
IsPrivate bool `xorm:"INDEX"`
IsEmpty bool `xorm:"INDEX"`
@@ -233,6 +236,7 @@ func (repo *Repository) AfterLoad() {
repo.NumOpenPulls = repo.NumPulls - repo.NumClosedPulls
repo.NumOpenMilestones = repo.NumMilestones - repo.NumClosedMilestones
repo.NumOpenProjects = repo.NumProjects - repo.NumClosedProjects
+ repo.NumOpenActionRuns = repo.NumActionRuns - repo.NumClosedActionRuns
}
// LoadAttributes loads attributes of the repository.
diff --git a/models/repo/repo_unit.go b/models/repo/repo_unit.go
index e20d03e2c5..ee450a46c4 100644
--- a/models/repo/repo_unit.go
+++ b/models/repo/repo_unit.go
@@ -174,7 +174,7 @@ func (r *RepoUnit) BeforeSet(colName string, val xorm.Cell) {
r.Config = new(PullRequestsConfig)
case unit.TypeIssues:
r.Config = new(IssuesConfig)
- case unit.TypeCode, unit.TypeReleases, unit.TypeWiki, unit.TypeProjects, unit.TypePackages:
+ case unit.TypeCode, unit.TypeReleases, unit.TypeWiki, unit.TypeProjects, unit.TypePackages, unit.TypeActions:
fallthrough
default:
r.Config = new(UnitConfig)
diff --git a/models/unit/unit.go b/models/unit/unit.go
index c4743dbdb4..bcd0572ab9 100644
--- a/models/unit/unit.go
+++ b/models/unit/unit.go
@@ -27,6 +27,7 @@ const (
TypeExternalTracker // 7 ExternalTracker
TypeProjects // 8 Kanban board
TypePackages // 9 Packages
+ TypeActions // 10 Actions
)
// Value returns integer value for unit type
@@ -54,6 +55,8 @@ func (u Type) String() string {
return "TypeProjects"
case TypePackages:
return "TypePackages"
+ case TypeActions:
+ return "TypeActions"
}
return fmt.Sprintf("Unknown Type %d", u)
}
@@ -77,6 +80,7 @@ var (
TypeExternalTracker,
TypeProjects,
TypePackages,
+ TypeActions,
}
// DefaultRepoUnits contains the default unit types
@@ -288,6 +292,15 @@ var (
perm.AccessModeRead,
}
+ UnitActions = Unit{
+ TypeActions,
+ "actions.actions",
+ "/actions",
+ "actions.unit.desc",
+ 7,
+ perm.AccessModeOwner,
+ }
+
// Units contains all the units
Units = map[Type]Unit{
TypeCode: UnitCode,
@@ -299,6 +312,7 @@ var (
TypeExternalWiki: UnitExternalWiki,
TypeProjects: UnitProjects,
TypePackages: UnitPackages,
+ TypeActions: UnitActions,
}
)
diff --git a/models/unittest/testdb.go b/models/unittest/testdb.go
index f3127006b8..7e327f2bd2 100644
--- a/models/unittest/testdb.go
+++ b/models/unittest/testdb.go
@@ -104,6 +104,8 @@ func MainTest(m *testing.M, testOpts *TestOptions) {
setting.Packages.Storage.Path = filepath.Join(setting.AppDataPath, "packages")
+ setting.Actions.Storage.Path = filepath.Join(setting.AppDataPath, "actions_log")
+
setting.Git.HomePath = filepath.Join(setting.AppDataPath, "home")
setting.IncomingEmail.ReplyToAddress = "incoming+%{token}@localhost"
diff --git a/models/user/user.go b/models/user/user.go
index a2c54a4429..0917bea754 100644
--- a/models/user/user.go
+++ b/models/user/user.go
@@ -559,32 +559,6 @@ func GetUserSalt() (string, error) {
return hex.EncodeToString(rBytes), nil
}
-// NewGhostUser creates and returns a fake user for someone has deleted their account.
-func NewGhostUser() *User {
- return &User{
- ID: -1,
- Name: "Ghost",
- LowerName: "ghost",
- }
-}
-
-// NewReplaceUser creates and returns a fake user for external user
-func NewReplaceUser(name string) *User {
- return &User{
- ID: -1,
- Name: name,
- LowerName: strings.ToLower(name),
- }
-}
-
-// IsGhost check if user is fake user for a deleted account
-func (u *User) IsGhost() bool {
- if u == nil {
- return false
- }
- return u.ID == -1 && u.Name == "Ghost"
-}
-
var (
reservedUsernames = []string{
".",
@@ -622,6 +596,7 @@ var (
"swagger.v1.json",
"user",
"v2",
+ "gitea-actions",
}
reservedUserPatterns = []string{"*.keys", "*.gpg", "*.rss", "*.atom"}
@@ -1013,6 +988,20 @@ func GetUserByID(ctx context.Context, id int64) (*User, error) {
return u, nil
}
+// GetPossibleUserByID returns the user if id > 0, or a system user (Ghost, Actions) if id < 0
+func GetPossibleUserByID(ctx context.Context, id int64) (*User, error) {
+ switch id {
+ case -1:
+ return NewGhostUser(), nil
+ case ActionsUserID:
+ return NewActionsUser(), nil
+ case 0:
+ return nil, ErrUserNotExist{}
+ default:
+ return GetUserByID(ctx, id)
+ }
+}
+
// GetUserByNameCtx returns user by given name.
func GetUserByName(ctx context.Context, name string) (*User, error) {
if len(name) == 0 {
diff --git a/models/user/user_system.go b/models/user/user_system.go
new file mode 100644
index 0000000000..f54f4e3ffb
--- /dev/null
+++ b/models/user/user_system.go
@@ -0,0 +1,64 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package user
+
+import (
+ "strings"
+
+ "code.gitea.io/gitea/modules/structs"
+)
+
+// NewGhostUser creates and returns a fake user for someone who has deleted their account.
+func NewGhostUser() *User {
+ return &User{
+ ID: -1,
+ Name: "Ghost",
+ LowerName: "ghost",
+ }
+}
+
+// IsGhost checks if the user is the fake user for a deleted account
+func (u *User) IsGhost() bool {
+ if u == nil {
+ return false
+ }
+ return u.ID == -1 && u.Name == "Ghost"
+}
+
+// NewReplaceUser creates and returns a fake user for an external user
+func NewReplaceUser(name string) *User {
+ return &User{
+ ID: -1,
+ Name: name,
+ LowerName: strings.ToLower(name),
+ }
+}
+
+const (
+ ActionsUserID = -2
+ ActionsUserName = "gitea-actions"
+ ActionsFullName = "Gitea Actions"
+ ActionsEmail = "teabot@gitea.io"
+)
+
+// NewActionsUser creates and returns a fake user for running actions.
+func NewActionsUser() *User {
+ return &User{
+ ID: ActionsUserID,
+ Name: ActionsUserName,
+ LowerName: ActionsUserName,
+ IsActive: true,
+ FullName: ActionsFullName,
+ Email: ActionsEmail,
+ KeepEmailPrivate: true,
+ LoginName: ActionsUserName,
+ Type: UserTypeIndividual,
+ AllowCreateOrganization: true,
+ Visibility: structs.VisibleTypePublic,
+ }
+}
+
+// IsActions checks if the user is the fake user for running actions
+func (u *User) IsActions() bool {
+ return u != nil && u.ID == ActionsUserID
+}
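`NewActionsUser` and `IsActions` give the rest of the code a cheap way to recognise the bot user, for example when resolving posters via `GetPossibleUserByID` or when deciding whether an event should trigger workflows (see the PR discussion about ignoring events from the `gitea-actions` bot). A minimal sketch of that intent, not the exact notification code:

```go
package main

import (
	user_model "code.gitea.io/gitea/models/user"
)

// shouldSkipActionsTrigger reports whether an event performed by doer should be
// ignored for workflow triggering, so the actions bot cannot trigger itself.
func shouldSkipActionsTrigger(doer *user_model.User) bool {
	return doer != nil && doer.IsActions()
}
```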