S3 Files

(allthingsdistributed.com)

152 points | by werner 3 hours ago

24 comments

  • MontyCarloHall 2 hours ago
    This is essentially S3FS using EFS (AWS's managed NFS service) as a cache layer for active data and small random accesses. Unfortunately, this also means that it comes with some of EFS's eye-watering pricing:

    — All writes cost $0.06/GB, since everything is first written to the EFS cache. For write-heavy applications, this could be a dealbreaker.

    — Reads hitting the cache get billed at $0.03/GB. Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.

    — Cache is charged at $0.30/GB/month. Even though everything is written to the cache (for consistency purposes), it seems like it's only used for persistent storage of small files (<128kB), so this shouldn't cost too much.
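    A quick back-of-the-envelope sketch of these numbers (rates as quoted above, not authoritative AWS pricing; the workload figures are made up):

```python
# Back-of-the-envelope sketch using the rates quoted above; these are
# the commenter's figures, not authoritative AWS pricing.
WRITE_PER_GB = 0.06                 # every write lands in the EFS cache first
CACHED_READ_PER_GB = 0.03           # reads served from the cache
CACHE_STORAGE_PER_GB_MONTH = 0.30   # small files (<128 kB) kept resident

def monthly_cost(gb_written, gb_read_cached, gb_cache_resident):
    """Rough monthly bill; large (>128 kB) reads streamed from S3 are free."""
    return (gb_written * WRITE_PER_GB
            + gb_read_cached * CACHED_READ_PER_GB
            + gb_cache_resident * CACHE_STORAGE_PER_GB_MONTH)

# e.g. 500 GB written, 200 GB of cached reads, 10 GB of small files resident:
print(monthly_cost(500, 200, 10))   # → 39.0
```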

    • the8472 39 minutes ago
      > Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.

      Always uncached? S3 has pretty bad latency.

  • rdtsc 2 hours ago
    Synchronization bits is what I was wondering about: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-fil...

    > For example, suppose you edit /mnt/s3files/report.csv through the file system. Before S3 Files synchronizes your changes back to the S3 bucket, another application uploads a new version of report.csv directly to the S3 bucket. When S3 Files detects the conflict, it moves your version of report.csv to the lost and found directory and replaces it with the version from the S3 bucket.

    > The lost and found directory is located in your file system's root directory under the name .s3files-lost+found-file-system-id.
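    For illustration, a minimal sketch of where a conflicted file would end up (the directory name pattern is quoted from the docs; the mount point and file system ID below are hypothetical):

```python
# Sketch of locating the "lost and found" directory described above.
# The name pattern is quoted from the docs; the mount point and file
# system ID are hypothetical.
from pathlib import Path

def lost_and_found_dir(mount_point: str, file_system_id: str) -> Path:
    """Path where S3 Files parks your side of a sync conflict."""
    return Path(mount_point) / f".s3files-lost+found-{file_system_id}"

print(lost_and_found_dir("/mnt/s3files", "fs-0123456789abcdef0"))
# → /mnt/s3files/.s3files-lost+found-fs-0123456789abcdef0
```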

  • jitl 1 hour ago
    I wish they offered some managed bridging to local NVMe storage. AWS NVMe is super fast compared to EBS, and EBS (node-exclusive access as block device) is faster than EFS (multi-node access). I imagine this can go fast if you put some kind of further-cache-to-NVMe FS on top, but a completely vertically integrated option would be much better.
  • nyc_pizzadev 1 hour ago
    This is very close to its first official release: https://fiberfs.io/

    Built-in cache, CDN compatible, JSON metadata, concurrency safe, and it targets all S3-compatible storage systems.

  • wbl 44 minutes ago
    "NFS provides the semantics your applications expect" is one of the funniest things I have ever read.
  • miguel_martin 36 minutes ago
    Dumb Q: what would happen if you used this to store a SQLite database? Would it just... work?

    My guess is this would only enable a read-replica and not backups as Litestream currently does?

  • gonzalohm 2 hours ago
    I cannot 100% confirm this, but I believe AWS insisted a lot on NOT using S3 as a file system. Why the change now?
    • yandie 2 hours ago
      It appears that they put an actual file system in front of S3 (AWS EFS, basically) and then perform transparent syncing. The blog post discusses a lot of caveats around consistency and object naming (inconsistencies are emitted as events to customers).

      Having been a fan of S3 for such a long time, I'm really happy with the design. It's a good compromise, and kudos to whoever managed to push the design through.

    • PunchyHamster 2 hours ago
      Because people will use it as a filesystem regardless of the original intent, because it's a very convenient abstraction. So they might as well do it in an optimal, supported way, I guess?
    • LazyMans 2 hours ago
      They found a way to make money on it by putting a cache in front of it. Less load for them, better performance for you. Maybe you save money, maybe you don't.
    • jitl 1 hour ago
      Because without significant engineering effort (see the blog post), the mismatch between object-store semantics and file semantics means you will probably Have A Bad Time. In much earlier eras of S3, there were also some implementation specifics, like throughput limits based on key prefixes (that one vanished circa 2016), that made it even worse to use for hierarchical directory shapes.
  • dang 50 minutes ago
    Since this is the thread that got attention, I've added the announcement link to the toptext and made the title work for both.
  • nvartolomei 2 hours ago
    > changes are aggregated and committed back to S3 roughly every 60 seconds as a single PUT

    Single PUT per file I assume?

    • LazyMans 2 hours ago
      Based on docs, correct.
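      As a rough illustration of that behavior, a toy sketch (not the actual S3 Files internals; `put_object` is a stand-in callable):

```python
# Toy sketch of the sync behavior described above: dirty files are
# coalesced, and each is flushed as one whole-object PUT per interval.
# put_object is a stand-in callable, not the real S3 Files internals.
class WriteCoalescer:
    def __init__(self, put_object, interval=60.0):
        self.put_object = put_object   # callable(key, data)
        self.interval = interval       # flush period (~60 s per the docs)
        self.dirty = {}                # key -> latest full contents

    def write(self, key, data):
        # Repeated edits to the same file collapse into one pending PUT.
        self.dirty[key] = data

    def flush(self):
        for key, data in self.dirty.items():
            self.put_object(key, data)  # one PUT per dirty file
        self.dirty.clear()

puts = []
c = WriteCoalescer(lambda key, data: puts.append(key))
c.write("report.csv", b"v1")
c.write("report.csv", b"v2")   # supersedes v1; only v2 gets PUT
c.write("notes.txt", b"hi")
c.flush()                      # in the real system this runs ~every 60 s
print(puts)                    # → ['report.csv', 'notes.txt']
```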
  • koolba 1 hour ago
    If you thought locking semantics over NFS were wonky, just wait till we throw a remote S3 backend into the mix!
  • mbana 1 hour ago
    Werner Vogels is awesome. I first discovered his writing when I learned about DynamoDB.
  • Centigonal 10 minutes ago
    Terrible day for people who sloppily use filesystem vocabulary when referring to S3 objects and prefixes.
  • mgaunard 3 hours ago
    Zero mention of s3fs which already did this for decades.
    • huntaub 2 hours ago
      This is pretty different from s3fs. s3fs is a FUSE file system that is backed by S3.

      This means that all of the non-atomic operations that you might want to do on S3 (including edits to the middle of files, renames, etc.) are run on the machine running s3fs. As a result, if your machine crashes, it's not clear what's going to show up in your S3 bucket or whether it would corrupt things.

      s3fs is also slow, because the next stop after your machine is S3, which isn't suitable for many file-based applications.

      What AWS has built here is different: using EFS as the middle layer means that there's a safe, durable place for your file system operations to go while they're being assembled into object operations. It also means that the performance should be much better than s3fs (it's talking to SSDs where data is 1 ms away, instead of HDDs where data is 30 ms away).

      • ChocolateGod 2 hours ago
        You can also use something like JuiceFS to make using S3 as a shared filesystem more sane, but you're moving all the metadata to a shared database.
    • luke5441 2 hours ago
      A more solid (especially when it comes to caching) solution would be appreciated.

      I thought that would be their https://github.com/awslabs/mountpoint-s3 . But there's no mention of that one either.

      S3 Files does have the advantage of a "shared" cache via EFS, but that would probably also make the cache slower.

      • PunchyHamster 2 hours ago
        I'd assume you can still have local cache in addition to that.
    • rowanG077 2 hours ago
      I was thinking: "No way this has existed for decades." But the earliest reference I can find is from 2008. Strictly speaking not decades, but much closer to it than I expected.
  • PunchyHamster 2 hours ago
    Eagerly awaiting the first blog post where developers didn't read the "eventually consistent" part, lost their data, and made some "genius" workaround with help from the LLM that got them into that spot in the first place.
  • up2isomorphism 1 hour ago
    This is why today's sales pitches are often disguised as tech blogs.
  • themafia 3 hours ago
    > we locked a bunch of our most senior engineers in a room and said we weren’t going to let them out till they had a plan that they all liked.

    That's one way to do it.

    > When you create or modify files, changes are aggregated and committed back to S3 roughly every 60 seconds as a single PUT. Sync runs in both directions, so when other applications modify objects in the bucket, S3 Files automatically spots those modifications and reflects them in the filesystem view automatically.

    That sounds about right given the above. I have trouble seeing this as anything other than a giant "hack." I already don't enjoy projecting costs for new types of S3 access patterns, and I feel like this has the potential to double the complication I already experience there.

    Maybe I'm too frugal, but I've been in the cloud for a decade now, and I've worked very hard to prevent any "surprise" bills from showing up. This seems like a great feature, if you don't care what your AWS bill is each month.

    • avereveard 3 hours ago
      There is a staggering number of users doing this with extra steps using FSx for Lustre; their lives got greatly simplified today (unless they use GPUDirect Storage, I guess).
      • themafia 2 hours ago
        Good point. There's a wide gulf between being able to design your workflow for S3 and trying to map an existing workflow to it.
  • gervwyk 2 hours ago
    Any recommendations for a Lambda-based SFTP server setup?
  • goekjclo 3 hours ago
    the "under the hood uses EFS" part is the most interesting bit here
  • minutesmith 1 hour ago
    [flagged]
    • glenjamin 1 hour ago
      The way AWS keep their pricing section completely separate from their system and architecture docs, despite architecture being the primary driver of cost, is a major contributor to this
  • ovaistariq 2 hours ago
    TLDR: EFS as an eventually consistent cache in front of S3.
  • mritchie712 1 hour ago
    tldr: this caches your S3 data in EFS.

    we run datalakes using DuckLake and this sounds really useful. GCP should follow suit quickly.

    • anentropic 37 minutes ago
      I am curious about this use case

      How do you see it helping with DuckLake?

  • DenisM 3 hours ago
    TLDR: Eventually consistent file system view on top of s3 with read/write cache.
  • CrzyLngPwd 3 hours ago
    If there was ever a post that needed a TLDR or an AI summary, it's that one.

    Sell the benefits.

    I have around 9 TB in 21m files on S3. How does this change benefit me?

    • dijksterhuis 2 hours ago
      not everything should or needs to be some article geared towards the audience's convenience, or selling something to the audience. pretty much all allthingsdistributed articles are long form articles covering highly technical systems and contain a decent whack of detail/context. in my mind, they veer closer to "computer scientist does blog posts" compared to "5 ways React can boost your page visits" listicles.

      edited slightly ... i really need to turn 10 minute post delay back on.

    • jz-amz 3 hours ago