Cloud Backups Error - State 3

OS Name/Version:
AlmaLinux 8.6

Product Name/Version:
AMP v2.4.0.8, built 10/10/2022 21:18

Problem Description:
I have been trying to use the Cloud Backups function, but I can’t seem to get it working. I keep getting the same error every time:

This task could not be completed: Uploading Backup to S3 - - State: 3.

Steps to repeat:

  • Add the required details in the Cloud Backup section
  • Create a backup
  • Press the Upload to S3 button

Actions taken to resolve so far:

  • Installing FUSE and rclone (I think this is most likely where the problem is; Alma is pretty minimal, so it is probably just missing something)
  • Changing storage providers: Backblaze B2 and Wasabi, both S3-compatible
  • Rotating credentials
  • Rotating bucket names
  • Using bucket IDs instead of names
  • Searched online & in the forum
  • Searched through the logs (what I could find at least)
  • I have independently confirmed the credentials can access the required buckets (a sketch of that check is below)
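
For reference, here is roughly how I installed the missing pieces and checked the credentials outside AMP. This is a hedged sketch: the package names assume EPEL is enabled, and the key and bucket values are illustrative placeholders rather than my real ones.

  # Install FUSE and rclone on AlmaLinux 8 (rclone ships in EPEL)
  sudo dnf install -y epel-release
  sudo dnf install -y fuse rclone

  # Verify the credentials against the provider's S3 endpoint using the AWS CLI
  aws configure set aws_access_key_id "KEY_ID"
  aws configure set aws_secret_access_key "SECRET_KEY"
  aws --endpoint-url https://s3.us-west-004.backblazeb2.com s3 ls s3://my-bucket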

Have you got the URL correct?

I’m also getting this error. I thought it would be best to add it here instead of creating a whole new thread.

Using TrueNAS Scale S3 via MinIO.

I’ve used both internal (http://10.6.4.112:9000) and external (https://s3.atbhosts.com) with the same result.
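
As a quick sanity check that the endpoint itself is reachable independent of credentials, MinIO exposes an unauthenticated liveness probe; something like this (using the URLs above) returns 200 when the API is up:

  curl -i http://10.6.4.112:9000/minio/health/live
  curl -i https://s3.atbhosts.com/minio/health/live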

  • Unlike the OP, I’m not rotating creds or buckets.
  • Like the OP, I have confirmed that everything works and is accessible via the WebGUI and S3 Browser.
  • I also created a test S3 instance via MinIO on my Portainer host and got the same result.

Have you checked the logs for the AMP instance you’re doing this on?

Adding my name to this issue. The URL is correct (s3.us-west-004.backblazeb2.com); I’ve tried it with and without the https. Debug logs provide no helpful information, just:
[15:40:52] [RunningTasksManager:yamikaitou Debug] : Task Uploading Backup to S3 (Scheduled Backup) ended: Acknowledged.

One thing I see in B2’s guide is that the signature needs to be v4 instead of v2; I’m not sure which AMP is using.
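
As an aside, if you want to rule the signature version out on the CLI side, the AWS CLI can be pinned to v4 signing. This only demonstrates the CLI’s behaviour, not what AMP does internally:

  # Force the AWS CLI to sign S3 requests with Signature Version 4
  aws configure set default.s3.signature_version s3v4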

AMP uses the official Amazon S3 libraries; it may be worth checking whether Amazon’s own S3 command line tools are able to connect.

I’ll try the CLI, but I tried using AMP again with AWS S3 and it worked:

[15:31:02] [API:yamikaitou Activity] : Changing setting LocalFileBackupPlugin.CloudStorageSettings.S3ServiceURL to https://.s3.amazonaws.com/
[15:31:50] [API:yamikaitou Activity] : Changing setting LocalFileBackupPlugin.CloudStorageSettings.S3ServiceURL to https://s3.us-east-1.amazonaws.com/
[15:34:27] [API:yamikaitou Activity] : Protected setting LocalFileBackupPlugin.CloudStorageSettings.S3AccessKey changed.
[15:34:31] [API:yamikaitou Activity] : Protected setting LocalFileBackupPlugin.CloudStorageSettings.S3SecretKey changed.
[15:34:46] [API:yamikaitou Activity] : Changing setting LocalFileBackupPlugin.CloudStorageSettings.S3BucketName to hoenn-backup
[15:37:34] [RunningTasksManager:yamikaitou Debug] : Task Uploading Backup to S3 (Scheduled Backup) ended: Finished
[15:42:27] [API:yamikaitou Activity] : Changing setting LocalFileBackupPlugin.CloudStorageSettings.S3ServiceURL to https://s3.us-west-004.backblazeb2.com/
[15:42:30] [API:yamikaitou Activity] : Changing setting LocalFileBackupPlugin.CloudStorageSettings.S3BucketName to hoenn-beldum
[15:43:06] [API:yamikaitou Activity] : Protected setting LocalFileBackupPlugin.CloudStorageSettings.S3AccessKey changed.
[15:43:09] [API:yamikaitou Activity] : Protected setting LocalFileBackupPlugin.CloudStorageSettings.S3SecretKey changed.
[15:43:24] [RunningTasksManager:yamikaitou Debug] : Task Uploading Backup to S3 (Scheduled Backup) ended: Acknowledged

Using the AWS CLI, I am able to successfully upload to Backblaze:

  amp@hoenn:~/.ampdata/instances/beldum/Backups$ aws --endpoint-url https://s3.us-west-004.backblazeb2.com/ s3 cp 20221017-181609-30ffb88d3692444a986b55f4a3f44d1a.zip s3://hoenn-beldum
  upload: ./20221017-181609-30ffb88d3692444a986b55f4a3f44d1a.zip to s3://hoenn-beldum/20221017-181609-30ffb88d3692444a986b55f4a3f44d1a.zip

And can you show me the config page in AMP?

I get the same results as YamiKaitou from my tests. MinIO is Amazon S3 v2 and v4 compatible, so it should work from what I understand.

Looks like not all S3 providers support the standard list of ACLs that Amazon does, so I’ve changed it for the next update.
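
For anyone who wants to see that failure mode outside AMP, here is a rough illustration with the AWS CLI (the endpoint and bucket are placeholders): an upload that requests one of Amazon’s canned ACLs can be rejected by a provider that doesn’t implement them, while the same upload without an ACL succeeds.

  # Plain upload with no ACL header; works on most S3-compatible providers
  aws --endpoint-url https://s3.example-provider.com s3 cp backup.zip s3://my-bucket
  # The same upload requesting an Amazon canned ACL; some providers reject this
  aws --endpoint-url https://s3.example-provider.com s3 cp backup.zip s3://my-bucket --acl bucket-owner-full-control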

Using the exact same config/setup as Atifex here.
TrueNAS Scale S3 via MinIO, verified S3 is accessible via WebUI and S3 Browser.
Running into the same error: “This task could not be completed: Uploading Backup to S3 - - State: 3.”

Good morning (or afternoon) everyone.

New AMP user here, but I wanted to chime in on what I’ve been able to accomplish towards getting MinIO working with AMP. My setup below is now working fine, but there will be more to come.

And thank you to the people testing and posting, as well as to Mike for updating AMP to help the situation.

I’ve tested on two platforms, and managed to get one working.

  • Platform 1: TrueNAS SCALE using MinIO - I’ve tried the ixSystems app, the TrueCharts app, and manual k8s configuration files, along with various combinations of nodePort, loadBalancer, and Ingress (using Traefik), with no TLS to keep things simple. No success (but more on why later): it always ends in either “The request signature we calculated does not match the signature you provided. Check your key and signing method.” or a DNS error, “A WebException with status NameResolutionFailure was thrown.”, depending on the URL used to access MinIO.

  • Platform 2: Synology NAS running DSM 7.1 - This was a more manual setup using the built-in Docker support to run MinIO. Again, no TLS to keep things simple. Initially no luck, with similar errors.

Note: The AWS SDK worked every time.

After beating my head against the wall and staring at the AMP configuration dialogs, I came to the realization that AMP is looking for DNS-style buckets. For those unfamiliar, AWS can use path-style buckets (http://server.address/bucket) or DNS-style buckets (http://bucket.server.address/). MinIO uses the former by default, and from what I can see, that won’t work with AMP.
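
If you want to see the difference from a client’s point of view, the AWS CLI exposes the same toggle (an illustrative aside; AMP itself doesn’t expose this setting as far as I can tell):

  # Path-style requests: http://server.address/bucket/key (MinIO's default)
  aws configure set default.s3.addressing_style path
  # Virtual-hosted (DNS) style requests: http://bucket.server.address/key
  aws configure set default.s3.addressing_style virtual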

As it turns out, the latter DNS style works just fine; it just took some configuration. Here’s my setup for those curious.

For now I’m on the Synology using Docker because it was easier than dealing with Kubernetes/Traefik, but I do plan to attempt to move the setup back to TrueNAS, as we’re getting rid of the Synology eventually.

Relevant configurations…

  • MinIO startup command - server /data --console-address ":11090" --address ":11000" (note the use of the --address parameter here to set the API address to the correct port; it’s not something that’s well documented by MinIO)
  • Relevant MinIO environment variables (MINIO_DOMAIN; see below)
  • Docker ports
    • TCP 11000 → 11000 (API)
    • TCP 11090 → 11090 (Console)
  • DNS setup - wildcard DNS entry created: *.server.mynetwork.com pointing to the server IP (this is critical!). A sketch pulling these pieces together follows this list.
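
Here is roughly the equivalent docker run. It is a sketch under assumptions: the credentials, volume path, and domain are placeholders for my real values.

  docker run -d --name minio \
    -p 11000:11000 -p 11090:11090 \
    -e MINIO_ROOT_USER=admin \
    -e MINIO_ROOT_PASSWORD=changeme \
    -e MINIO_DOMAIN=server.mynetwork.com \
    -v /volume1/minio:/data \
    minio/minio server /data --address ":11000" --console-address ":11090"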

The MINIO_DOMAIN variable enables the DNS-style routing. That and the wildcard DNS entry are the key to making it work.

I created a user in MinIO called amp and assigned it the “readwrite” policy, then created a service account for that user, and then created a bucket. By default in MinIO, that user now has access to all buckets unless a bucket has special permissions assigned. Note that my buckets are still set to “private”. The same steps with MinIO’s mc client are sketched below.
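
For those who prefer the command line, roughly the same steps with mc (a hedged sketch; the alias, names, and passwords are placeholders, and the syntax matches the mc releases current at the time of writing):

  mc alias set mynas http://server.mynetwork.com:11000 admin changeme
  mc admin user add mynas amp 'amp-password'
  mc admin policy set mynas readwrite user=amp
  mc admin user svcacct add mynas amp    # prints the access key / secret key pair for AMP
  mc mb mynas/amp-backups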

AMP configuration -

  • Use S3 Storage for Backups - Enabled
  • S3 Service URL - http://server.mynetwork.com:11000 (this should point to the MinIO API URL and not the console)
  • S3 Bucket Name -
  • S3 Access Key - Given when you create the service account under the AMP user
  • S3 Secret Key - Given when you create the service account under the AMP user

And that’s it. MinIO is working perfectly.

Lessons learned

  • DNS-style bucket names must work (AMP appears to require virtual-hosted-style addressing)
  • MinIO is touchy about what it thinks the host name is, so watch your redirect URLs and such. I would also recommend setting the API and console ports on the command line when you start the container instead of trying to redirect ports; then you can do a 1:1 port mapping. A quick DNS sanity check is sketched below.
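
Any bucket-style hostname under the wildcard should resolve to the MinIO server before you point AMP at it. A minimal check, using the hypothetical names from above:

  # Both should return the MinIO server's IP, thanks to the wildcard record
  nslookup amp-backups.server.mynetwork.com
  nslookup anything-at-all.server.mynetwork.com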

My future plan is to move this setup to TrueNAS SCALE, possibly using SSL, but we use an internal CA, so that might get messy. (AMP’s Docker container probably won’t recognize my internal CA without me adding the CA certificates, and I’m not sure that’s possible yet for a containerized game server setup like I’m running.)

Thanks for reading and I hope this helps others get their MinIO set up working.

Ed P

Quick update: I was able to get this working on TrueNAS SCALE.

Notes -

  • I routed all inbound traffic through Traefik and didn’t have to resort to nodePort or loadBalancer type configurations.
    • I created 3 IngressRoutes in Traefik: one Host rule that routes directly to the console, one Host rule that routes directly to the API port, and one HostRegexp rule that routes a wildcard to the API to handle the DNS-based bucket names (a trimmed sketch follows this list). I also added “passHostHeader” to make sure the headers get passed to MinIO. I think that’s the default, but I specified it and it works, so I’m leaving it. :rofl:
  • I found that the ixSystems and TrueCharts deployments couldn’t meet my configuration needs with respect to Traefik configuration, so I did have to do the deployment through YAML files. I might be wrong about this one, but I had to use the HostRegexp rule in Traefik to get the wildcards working right.
  • This configuration should work with TLS if the AMP container recognizes your CA. I’m not sure mine will, and I don’t have time to mess with it today, so I’ll wait. For now, my Minecraft server is backing up to S3 on TrueNAS and I’m happy. :slight_smile:
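
Since the wildcard route is the non-obvious part, here is a trimmed sketch of it. This is a sketch under assumptions: the CRD apiVersion matches Traefik v2, and the hostname, namespace, service name, and port are placeholders for my real values.

kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: minio-s3-wildcard
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
    # Route any bucket-style hostname (bucket.s3.mynetwork.com) to the MinIO API
    - match: HostRegexp(`{bucket:[a-z0-9.-]+}.s3.mynetwork.com`)
      kind: Rule
      services:
        - name: minio
          port: 11000
          passHostHeader: true    # make sure MinIO sees the original Host header
EOF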

If anyone else is interested and needs help getting theirs set up just let me know. I’m happy to share config files, I just didn’t want to spam this post.

Do you lose access to the web console after redirecting? I’m now unable to log in via the web console, but S3 backups work via AMP.

Edit: apparently the Docker container was not resolving the DNS name. I must have missed that; it is fixed now. Thanks for your guide, it helped a lot.
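
In case it saves someone else time, the fix was just making the container use our internal DNS server (the one holding the wildcard record). A hedged sketch; the IP, image name, and hostname here are illustrative, not my exact setup:

  # Start the container pointed at the internal DNS server
  docker run -d --name amp --dns 10.0.0.1 cubecoders/ampbase
  # Confirm resolution from inside the container
  docker exec -it amp getent hosts my-bucket.server.mynetwork.com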

Glad to hear it helped!

I learned a lot about S3 buckets that day. lol