
Add workers :auto #3827

Merged
nateberkopec merged 3 commits into main from auto-dsl-workers
Jan 20, 2026

Conversation

@nateberkopec
Member

It's probably not that great that you MUST use the ENV var in order to get this neat behavior.

I noticed that this rounds down, which means a cpu quota of 512 will potentially put you into single mode (rounds 0.5 to 0). That's a bit weird but probably not hit often enough IRL to matter (and it's kinda what I would prefer to happen anyway).
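A quick illustration of that truncation, in plain Ruby and independent of Puma:

```ruby
# Ruby's Integer() truncates a Float toward zero, so a fractional
# CPU quota below 1.0 resolves to zero workers, i.e. single mode.
Integer(0.5)   # => 0
Integer(1.75)  # => 1
```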

Comment thread lib/puma/dsl.rb
# +Concurrent.available_processor_count+ (requires the concurrent-ruby gem).
# If available processor count is a Float (cpu quotas), we will round down.
#
# @note Cluster mode only.
Member Author

This seems wrong to me, given that `workers 0` can be used to set single mode.

@github-actions bot added the `waiting-for-review` (Waiting on review from anyone) label Nov 20, 2025
Collaborator

@joshuay03 joshuay03 left a comment

It's probably not that great that you MUST use the ENV var in order to get this neat behavior.

💯

LGTM overall. Just some feedback on doc changes.

Comment thread docs/deployment.md
Comment thread README.md
```
workers :auto
```

Note that threads are still used in cluster mode, and the `-t` thread flag setting is per worker, so `-w 2 -t 16:16` will spawn 32 threads in total, with 16 in each worker process.
Collaborator

The positioning of this line now seems awkward. It might read better before 'When using a config file ...'?

@github-actions bot added the `waiting-for-merge` label and removed the `waiting-for-review` (Waiting on review from anyone) label Nov 21, 2025
Comment thread lib/puma/configuration.rb Outdated
return Integer(::Concurrent.available_processor_count)
end

Integer(value)
Member

Maybe update with the following:

      if value == :auto || value == 'auto'
        require_processor_counter
        Integer(::Concurrent.available_processor_count)
      else
        Integer(value)
      end

Member

@nateberkopec

Sorry, that didn't turn out right. I think you know what I mean.

Have a good day...

@byroot
Contributor

byroot commented Nov 26, 2025

Hmm, not sure if you're aware, but I did implement `.available_processor_count` specifically for use inside Rails' puma.rb config. But it turns out a bunch of providers don't expose the cgroup limits, so we had to revert: rails/rails#52522 / rails/rails@c68cea4

But perhaps this is outdated info? Did you test the common providers?

@dentarg
Member

dentarg commented Jan 9, 2026

I noticed that this rounds down, which means a cpu quota of 512 will potentially put you into single mode (rounds 0.5 to 0). That's a bit weird but probably not hit often enough IRL to matter (and it's kinda what I would prefer to happen anyway).

Should we care about this?

...if single mode happens and you have hooks for cluster mode, you will start to see warnings about `before_worker_boot`, so perhaps you remove them; but then later, :auto gets you multiple workers again, and you needed those hooks (disconnect database, etc.)
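A hypothetical `puma.rb` sketching that trap (`on_worker_boot` is Puma's documented DSL method for the `before_worker_boot` hook; the scenario itself is illustrative, not from this PR):

```ruby
# puma.rb -- illustrative config fragment
workers :auto # may resolve to 0 workers on a fractional CPU quota

on_worker_boot do
  # Reconnect the database, etc. If :auto lands you in single mode,
  # Puma warns that this hook is ignored; delete the hook in response,
  # and a later :auto resolution to >= 1 workers silently runs
  # without the reconnect logic.
end
```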

@nateberkopec
Member Author

@byroot interesting, but this isn't a default ATM so I'm not worried about it not working on all platforms.

@dentarg That's probably more of an issue with how we do hooks than with this :auto behavior, I'd say? Worth calling out in docs though.

@nateberkopec nateberkopec merged commit 5d7d1dd into main Jan 20, 2026
167 of 170 checks passed
@nateberkopec nateberkopec deleted the auto-dsl-workers branch January 20, 2026 02:53
@dentarg
Member

dentarg commented Jan 20, 2026

but this isn't a default ATM

Should it ever be?

@jjb
Contributor

jjb commented Jan 20, 2026

some environments don't reveal the cpu share, and `::Concurrent.available_processor_count` will show the host, not the running container. on two i just checked:

cgroups v1: `cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us` returns -1

cgroups v2: `/sys/fs/cgroup/cpu.max` is not present

puma could introspect this and conclude "my cores are being limited but i don't know what the limit is"
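That introspection could look something like this sketch. The helper name is hypothetical; the format parsed is the cgroups v2 `cpu.max` interface, and the function takes the file contents as a string, so the absent-file case described above stays the caller's problem:

```ruby
# Parse the contents of /sys/fs/cgroup/cpu.max (cgroups v2).
#   "max 100000"   -> no quota advertised (nil)
#   "50000 100000" -> quota of 0.5 CPU
def cpu_quota(cpu_max_contents)
  quota, period = cpu_max_contents.split
  return nil if quota == 'max'
  Float(quota) / Float(period)
end

cpu_quota('max 100000')   # => nil
cpu_quota('50000 100000') # => 0.5
```

A nil here (or a missing file, or -1 from the v1 `cpu.cfs_quota_us`) is exactly the "limited but i don't know the limit" state jjb describes.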

@nateberkopec
Member Author

@dentarg Maybe, but then we'd have to do things like deal with @byroot's comment

@jjb just curious, which env, would help me if I could reproduce?

What would even be the correct behavior in that situation... deploy in single mode but warn? Bit of a dangerous situation.

@jjb
Contributor

jjb commented Jan 21, 2026

which env, would help me if I could reproduce?

aptible and shipyard. i bet @bueller would be happy to give you a free environment to play with on shipyard

What would even be the correct behavior in that situation... deploy in single mode but warn? Bit of a dangerous situation.

in all environments except prod, definitely raise

in systems i've worked with, raising in prod is also fine: it halts the deploy, the user sees the error and fixes it. but puma could also fall back to 1 process and stay in cluster mode (in case there are cluster-specific behaviors the user is expecting), and log an error.
