LinkAce with SSL on a subdomain of my domain

Description of the Issue
I am trying to set up LinkAce using the advanced Docker setup on a VPS that has a local installation of nginx. I want to serve it from a subdomain such as "https://linkace.mydomain.dev". I already have a wildcard cert for my domain, and other apps are already served on their respective subdomains through the local nginx install. This is the nginx conf I have for LinkAce in the local nginx install:

I have replaced the name of my domain with “mydomain” for this post. But everything else is as it appears in the files.

server {
        listen 443 ssl;

        server_name linkace.mydomain.dev;

        ssl_certificate /etc/letsencrypt/live/mydomain.dev/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.dev/privkey.pem;

        location / {
                proxy_pass http://localhost:9002;
        }
}
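
For completeness, if plain HTTP on the subdomain should be redirected to HTTPS as well, a companion server block along these lines does the trick (same server_name assumed):

server {
        listen 80;
        server_name linkace.mydomain.dev;
        return 301 https://$host$request_uri;
}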

Besides the changes the directions call for in .env, I also modified:

APP_URL=http://linkace.mydomain.dev

For docker-compose.yml, I have the nginx config as:

...
  nginx:
    image: bitnami/nginx:1.19
    restart: unless-stopped
    ports:
      - "0.0.0.0:9002:8080"
      #- "0.0.0.0:9002:8443"
    depends_on:
      - app
    volumes:
      - linkace_app:/app
      - ./nginx.conf:/opt/bitnami/nginx/conf/server_blocks/linkace.conf:ro
      #- /etc/letsencrypt/live/mydomain.dev:/certs
...

And I have the nginx.conf as:

server {
    root /app/public;
    server_name _;
    index index.php;
    charset utf-8;
    client_max_body_size 20M;
    port_in_redirect off;

    # Choose the connection method
    listen 0.0.0.0:8080;
    #listen 0.0.0.0:8443 ssl;

    # Provide SSL certificates
    #ssl_certificate      /certs/fullchain.pem;
    #ssl_certificate_key  /certs/privkey.pem;

    # Content security headers for Laravel
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
...

If I change the settings related to SSL, such as the port in docker-compose.yml and the listen and ssl_certificate lines in nginx.conf, the port doesn't get exposed by Docker, and when I visit my URL I get "502 Bad Gateway". If I keep all the settings on HTTP and just leave my local nginx conf as is, I can access LinkAce at https://linkace.mydomain.dev, but all the links within the app use localhost:9002 instead of my domain, so none of the assets load correctly and the buttons don't work either.

If I keep the HTTP settings in the LinkAce files, I can access LinkAce via my VPS's IP on port 9002, the app works correctly, and all the assets load. But I want to use HTTPS and my domain.
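
For anyone debugging a similar setup, a couple of quick checks on the host can help narrow down whether the problem is in the container or in the proxy (service name nginx and port 9002 as in the compose file above):

# Is the published port actually listening?
docker-compose ps

# Does the LinkAce nginx container answer directly?
curl -I http://localhost:9002

# Anything useful in the container logs?
docker-compose logs --tail=50 nginx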

I have been playing around with the configuration for a couple of days now and have not been able to get this set up successfully. What do I need to change to get this working on my URL over SSL?

LinkAce setup (please complete the following information):

  • Version: 1.2.2
  • Installed via: Advanced Docker setup
  • OS: Ubuntu

I just noticed that there are additional directions on the docs page about using a proxy or load balancer and setting the X-Forwarded-Proto header, so I added that to my locally installed nginx conf, which now looks like this:

server {
        listen 443 ssl;

        server_name linkace.mydomain.dev;

        ssl_certificate /etc/letsencrypt/live/mydomain.dev/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.dev/privkey.pem;

        add_header X-Forwarded-Proto "http";

        location / {
                proxy_pass http://localhost:9002;
        }
}

Other than that, the three files from the LinkAce repo were changed back to the default HTTP settings, and I changed the port for nginx in docker-compose.yml to publish on port 9002 instead. This lets me connect to LinkAce over my domain at https://linkace.mydomain.dev/, but it still has the issue where URLs and assets try to connect to localhost instead of my actual domain, so nothing loads correctly. I tried setting APP_URL in .env to https://linkace.mydomain.dev instead of its default http://localhost, but this did not change anything.

I was able to get it working. Now I can visit https://linkace.mydomain.dev/, the app works as expected, and all the assets load just fine. This is the nginx conf I have:

server {
        listen 443 ssl;

        server_name linkace.mydomain.dev;

        ssl_certificate /etc/letsencrypt/live/mydomain.dev/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.dev/privkey.pem;

        location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_pass http://localhost:9002;
        }
}

I added the two proxy_set_header lines and now it's all working.

The only issue I have now is that when I add a new link, it takes about five seconds and then gives me the message "The Link was added but a connection error occurred when trying to access the URL. Details can be found in the logs."

Glad you solved this problem. :+1:
Indeed, LinkAce needs

proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;

to run properly behind an nginx proxy. I will update the documentation with a hint for this.
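
The earlier attempt used add_header, which only adds headers to the response sent back to the browser; proxy_set_header is what passes a header on to the upstream app. A typical location block for this (the extra forwarding headers are optional but generally useful) looks like:

location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://localhost:9002;
}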


About the link-adding issue: can you share the URL you want to add? Or does this happen with all links? If there are connection issues, you should find them in the log files, which are available under the System Logs menu item. They should look like this:

[2021-03-05 23:15:48] testing.WARNING: http://192.168.0.123:54623: cURL error 7: Failed to connect to 192.168.0.123 port 54623: Connection refused  

Yes, this happens with all links. Here are the logs after adding a couple of links, one with the quick-add feature and one with the manual form.

warning production 2021-03-05 23:25:11 https://github.com/: cURL error 28: Connection timed out after 5000 milliseconds (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://github.com/

warning production 2021-03-05 23:24:27 https://www.newegg.com/product-shuffle: cURL error 28: Connection timed out after 5000 milliseconds (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://www.newegg.com/product-shuffle

If this happens with all links, it could mean that outgoing HTTP connections from your server, or from your Docker containers, are blocked. Did you set up iptables or UFW?

Could you try to run curl -vv https://google.com directly from your server, and once from inside a Docker container?
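
Running it inside the container can be done with docker-compose exec; assuming the PHP service is named app as in your compose file and the image ships curl, something like:

# from the directory containing docker-compose.yml
docker-compose exec app curl -vv https://google.com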

I believe I disabled Docker's iptables management so that I could manage ports exclusively with UFW, since I was having issues before with the two overwriting each other.
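
For context, disabling Docker's iptables handling usually means a /etc/docker/daemon.json along these lines (mine may have had more in it):

{
  "iptables": false
}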

The curl command from the server works without any error, but from within the Docker container it gives "Immediate connect fail" and "Cannot assign requested address".

I would assume that your Docker containers are not allowed to connect to anything outside the container network. I am not sure how Docker networking and iptables work together under the hood, but it sounds like this could be the problem.

Could you check this and report back then?

Yeah, I'll dig through and see what I can find. From what I have seen so far, some have suggested adding a UFW rule for Docker's network interface docker0, but that has not worked for me.
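
For reference, the kind of rules people usually suggest look like this (docker0 and eth0 are the typical interface names; yours may differ), and they did not help in my case either:

sudo ufw allow in on docker0
sudo ufw route allow in on docker0 out on eth0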

I was able to find a solution. Basically, I had to completely disable UFW and revert all the Docker iptables settings back to their defaults. From there I set up a cloud firewall within DigitalOcean that points to my VPS and only allows the ports I actually need.
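
The exact rules depend on what else the droplet serves, but for a host that only exposes SSH and the nginx proxy, the inbound side looks something like this (all outbound traffic left open):

Inbound:
  SSH    TCP  22    All IPv4, All IPv6
  HTTP   TCP  80    All IPv4, All IPv6
  HTTPS  TCP  443   All IPv4, All IPv6

Outbound:
  All TCP/UDP/ICMP  All ports  All IPv4, All IPv6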

With this, I can only access my services through the respective domains I configured with nginx, LinkAce is able to make outgoing requests, and I can save links with no issues!

Here is a Stack Overflow thread about the issues with using Docker and UFW together. The first answer explains that setting "iptables": false in /etc/docker/daemon.json blocks all outgoing connections by default, which is what I was using. It then lists additional steps to make that setup work, but neither those nor any of the other answers worked for me, until someone suggested completely removing UFW and using DigitalOcean's firewall, which is what finally worked.
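
In practice, reverting to the defaults boiled down to removing that setting and restarting Docker, then taking UFW out of the picture; on a systemd-based Ubuntu install that is roughly:

# remove "iptables": false from /etc/docker/daemon.json (or delete the file if that's all it contains)
sudo systemctl restart docker

# stop UFW from interfering entirely
sudo ufw disable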

Here is the Stack Overflow thread that led me to use those Docker daemon settings in the first place.

Great! Glad you found a solution and documented the steps. :+1: