  • So, basically, the trick to setting this up in Caddy is that you mostly don’t do anything. Caddy is so much smarter than Nginx that it just figures all of this out for you.

    So this:

    # Notes Server - With WebSocket
    server {
        listen 80;
        server_name notes.domain.com;
        return 301 https://$host$request_uri;
    }
    
    server {
        listen 443 ssl;
        server_name notes.domain.com;
    
        ssl_certificate /etc/letsencrypt/live/notes.domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/notes.domain.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    
        location / {
            proxy_pass http://localhost:5264/;
            proxy_http_version 1.1;
            # pass the WebSocket upgrade headers through to the backend
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            # long timeouts so idle WebSocket connections aren't dropped
            proxy_read_timeout 3600;
            proxy_send_timeout 3600;
        }
    }
    

    in Caddy becomes this:

    notes.domain.com {
            reverse_proxy IP_ADDRESS:5264
    }
    

    Yeah. This is why I love Caddy.
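
    For anyone wondering where the certificates and the port-80 redirect went: Caddy provisions and renews TLS certificates automatically for any site block with a hostname, redirects HTTP to HTTPS by default, and its reverse_proxy handles WebSocket upgrades on its own. Here’s the same block with comments spelling out what it’s doing implicitly:

    notes.domain.com {
            # automatic HTTPS: Caddy obtains a certificate for this
            # hostname and redirects port 80 to 443, replacing the
            # ssl_* lines and the whole first server block above.
            # reverse_proxy also handles the Upgrade/Connection
            # headers for WebSockets, so none of that is needed.
            reverse_proxy IP_ADDRESS:5264
    }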

    In the end I only had to include a couple of the header modifiers to get everything working. So my finished file looked like this:

    auth.domain.com {
            reverse_proxy IP_ADDRESS:8264 {
                    # {host} and {remote_host} are Caddy's placeholders
                    # for the original Host header and the client IP
                    header_up Host {host}
                    header_up X-Real-IP {remote_host}
            }
    }
    
    notes.domain.com {
            reverse_proxy IP_ADDRESS:5264
    }
    
    events.domain.com {
            reverse_proxy IP_ADDRESS:7264
    }
    
    mono.domain.com {
            reverse_proxy IP_ADDRESS:6264
            # Caddy doesn't cache upstream responses, so there's no
            # equivalent of nginx's $upstream_cache_status here
            header Cache-Control "public, no-transform"
    }
    

    Obviously, update “domain.com” and “IP_ADDRESS” to the appropriate values. I’m actually not even 100% sure that all of that is necessary, but my setup seems to be working, including the monograph server.

    One very important aside, though: in your .env file, don’t do this:

    AUTH_SERVER_PUBLIC_URL=https://auth.domain.com/
    NOTESNOOK_APP_PUBLIC_URL=https://notes.domain.com/
    MONOGRAPH_PUBLIC_URL=https://mono.domain.com/
    ATTACHMENTS_SERVER_PUBLIC_URL=https://files.domain.com/
    

    Those trailing slashes will mess everything up. Strip them off so it looks like this:

    AUTH_SERVER_PUBLIC_URL=https://auth.domain.com
    NOTESNOOK_APP_PUBLIC_URL=https://notes.domain.com
    MONOGRAPH_PUBLIC_URL=https://mono.domain.com
    ATTACHMENTS_SERVER_PUBLIC_URL=https://files.domain.com
    

    Took me a while to work that one out.

    I might still need to tweak some of this. I’m getting an occasional “Unknown network error” in the app, but all my notes are syncing, monographs publish just fine, and generally everything else seems to work, so I’m not entirely sure what the issue is that Notesnook is trying to tell me about, or if it’s even something I need to fix.

    Edit: OK, the issue was that I didn’t have files.domain.com set up. Just directly proxying it solves one error but creates another, so I’ll need to play with that part a little more. It’s probably down to Minio doing its own proxying on the backend (because it rewrites HTTP requests at 9009 to HTTPS at 9090). Will update when I get it working. Anyway, for now everything except attachments seems to work.
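
    If it saves anyone some trial and error, the direction I’m planning to try is pointing Caddy straight at Minio’s HTTPS listener rather than the HTTP one. Untested sketch: 9090 is the HTTPS port mentioned above, and tls_insecure_skip_verify is only there on the assumption that Minio is serving a self-signed certificate on the backend; leave it out if the cert is a proper one.

    files.domain.com {
            reverse_proxy https://IP_ADDRESS:9090 {
                    transport http {
                            # skip verification of Minio's (assumed)
                            # self-signed backend certificate
                            tls_insecure_skip_verify
                    }
            }
    }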



  • Noted; I’ll be giving that a proper read after work. Thank you.

    Edit to add: Yeah, that pretty much mirrors my own experience of using AI as a coding aid. Even when I was learning a new language, I found that my comprehension of the material very quickly outstripped whatever ChatGPT could provide. I’d much rather understand what I’m building because I built it myself. A lot of the time, when you use a solution someone else provided, you don’t find out until much later how badly that solution held you back, because it wasn’t actually the best way to tackle the problem.



  • The issue is that AI is being invested in as if it can replace jobs. That’s not an issue for anyone who wants to use it as a spellchecker, but it is an issue for the economy, for society, and for the planet, because billions of dollars of computer hardware are being built and run on the assumption that trillions of dollars of payoff will be generated.

    And correcting someone’s tone in an email is not, and will never be, a trillion-dollar industry.


  • I think these are actually valid examples, albeit ones that come with a really big caveat: you’re using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients who you have to be extra careful with. But the reality is, you can’t always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill, one every bit as important to doing your job well as knowing how to correctly configure an ACL on a Cisco managed switch.

    That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There’s certainly an argument that by using the tools while paying attention to their output, you build those skills for yourself. Learning by example works. I think, used in that way, there’s potentially real value there.

    Which is kind of the broader story with Gen AI overall. It’s not that it can never be useful; it’s that, at best, it can only ever aspire to “useful.” No one has yet demonstrated any ability to make AI “essential”, and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful is sheer fucking lunacy.




  • “It’s also a testament to Altman’s dealmaking prowess: a progressive San Francisco tech leader walked into an administration that opposed everything he publicly stood for, and within days, he secured a crown.”

    Bullshit. Absolute steaming mountains of bullshit.

    These techbros don’t stand for anything. They’re a bunch of objectivist libertarian assholes who enjoy the aesthetics of being perceived as progressive because it’s trendy in the kind of social spaces they want to inhabit.

    This isn’t about “dealmaking skills”; it’s about Trump being a giant baby who will do anything for anyone who kisses his ass, and Altman not giving a flying fuck about Trump being a fascist because sucking up to him makes him money.




  • Well, thanks to your guidance I was able to get my own server up and running. Converting the reverse proxy to Caddy was very easy, but then everything involving Caddy is stupidly easy. That also removed all the steps involving certs.

    I’m going to try leaving out the subdomain for the S3 storage. Notesnook doesn’t seem to require it in the setup, whereas the other four addresses are specifically requested, and I feel like it would be better for security to not have Minio directly accessible over the web.

    I also really want to try attaching their web app to this. They don’t seem to have their own Docker release for it though, unless I missed something.


  • Hi, thank you so much for posting this. It’s a much better tutorial than the one provided by the Notesnook devs.

    With that being said, I think it would be really helpful to have a bit more of a breakdown of what these individual components are doing and why. For example, what is the actual purpose of strapping a Monograph server onto this stack? Is it needed for the core Notesnook server to work, or is it optional? Does it have to be accessible over the web, or could we leave it as a local-access-only component? Same questions for the S3 storage. Similarly, it would be good to get a better understanding of the relationship between the identity server and the main server. Why do both of those components have to be web accessible at different subdomains?

    This sort of information is especially helpful to anyone trying to adapt your process: for example, if they’re using a different reverse proxy, or if they want to swap in a different storage back-end.

    Anyway, thanks again for all the time you put into this; it’s really helpful.