rclone and Encryption Tutorial
https://www.andyibanez.com/rclone-encryption-tutorial/
Sat, 11 Feb 2017

The post rclone and Encryption Tutorial appeared first on Andy Ibanez.

rclone is a command line tool, similar to rsync, with the difference that it can sync, move, copy, and in general do other file operations on cloud services. You can use rclone to create backups of your servers or personal computers or to store your files in the cloud, optionally adding encryption to keep them safe from prying eyes, which is what this post is about.

rclone supports the most popular cloud storage services, including but not limited to Dropbox, Amazon Cloud Drive, Google Drive, and OneDrive. You can get unlimited Amazon Cloud Storage for $60 a year (update – this is no longer true for US-based accounts; it now costs US$60 per TB per year), and unlimited Google Drive storage if you pay for a G Suite account ($10 per month, plus around $10 per year for a domain). G Suite officially lists unlimited storage as starting at 5 users or more, but the folks at reddit.com/r/datahoarder state that, while the limit is there, Google doesn’t enforce it, so you can have unlimited storage with fewer users as well (you do get “official” unlimited storage once you have 5 people in your G Suite account). I cannot currently verify the validity of this claim (update – it’s real, guys).

Getting Started With rclone

Using the command line can be daunting, but it doesn’t have to be. rclone’s commands are nicely structured and very easy to use. The initial configuration is straightforward and doesn’t take long.

Installation

You can skip this section if you have experience with rclone.

The Installation Page has instructions for installing rclone on your favorite operating system. rclone runs on Linux, macOS, and Windows.

If you use macOS, you can install it with a single command using Homebrew:

brew install rclone

If you don’t use Homebrew, refer to the link above to install rclone without it.

Configuring and Using a Normal rclone Remote

You can skip this section if you already have experience with rclone, if you know how to configure a remote, and if you know how to use the basic rclone commands (copy, move, etc).

Before you can use an encrypted remote, you need to understand how to use normal ones.

What is a remote, anyway?

I have been saying this word a lot, so it might not be obvious what it means. rclone’s base concept is the remote, and it’s simply a cloud service that you have configured to use with rclone. For example, if you configure your Dropbox account, then it becomes a remote for rclone. The same with your Google account – Configure your Google account, and it will be a remote for rclone.

rclone config

rclone config is the command you use for configuration purposes, including creating remotes. In this example, I will configure a standard Gmail account. Go ahead and type this in your terminal:

rclone config

If you already have some remotes, they will be listed at the top. The output of the command will be something like this:

Name                 Type
====                 ====
AmazonCloudDrive  amazon cloud drive
EncryptedACD      crypt

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>

Naturally, you won’t see any remotes at the top if you don’t have any yet. Go ahead and press n to create a new remote, followed by enter. The first thing it will do is prompt you for a name. The name of the remote can be anything you recognize, but it should not contain spaces. I will name mine aibanezDrive.

Following the name, you will be prompted for a storage type. I am configuring Google Drive, so I will choose option 7.

Many providers will prompt you for a client_id and a client_secret. You don’t need these most of the time; you should only use them if you created a custom app for your remote. In my experience, I haven’t needed to. You can press enter without writing anything at both prompts.

Later you will be asked if you want to use auto config. You will say yes to this if you are not running a headless system (no GUI). By choosing this option, rclone will launch a browser window prompting you to log in to your account (in this case, Gmail). You will see the standard login flow for your provider. After you are done with the credentials and allowing rclone to use your storage account, you will see a screen saying “Success!” and it will ask you to look back at your rclone terminal window.

Success

You will see something like this:

[aibanezDrive]
client_id = 
client_secret = 
token = {...}

Press y to save the remote. Then press q and enter to exit the configuration prompt.

To confirm that your remote was saved successfully, you can run rclone listremotes. This will show all your remotes, with the names you gave them. On my system, this prints:

AmazonCloudDrive
EncryptedACD
aibanezDrive

Trying out our rclone remote

Now that we have created a remote, we need to try it out! In this example we will create a set of files and directories and upload them to our Google Drive account using rclone. Keep in mind that we are currently playing with a non-encrypted remote, so don’t choose any files you wouldn’t store unencrypted in the cloud. We will create the encrypted remote at the end of this tutorial.

For this part, I will go to ~/Desktop and create a directory called Cards there. I will create a plain file and two directories which contain more plain files. Essentially, I will be creating this:

Cards
 - Sakura
   - The Mirror
   - The Fly
   - The Sword
   - The Return
 - Shaoran
   - The Time
   - The Storm
   - The Return
   - The Freeze
 - None.txt

Following the convention of using Card Captor Sakura examples, I will create two folders for both main characters along with a few cards they caught in the show, and a None.txt file that would contain a list of cards none of them caught.

If you want to do the same, just run these commands below to create the same folder structure:

mkdir ~/Desktop/Cards && cd ~/Desktop/Cards
mkdir Sakura Shaoran
touch Sakura/The\ Mirror
touch Sakura/The\ Fly
touch Sakura/The\ Sword
touch Sakura/The\ Return
touch Shaoran/The\ Time
touch Shaoran/The\ Storm
touch Shaoran/The\ Return
touch Shaoran/The\ Freeze
touch None.txt

Copying the local directory to Google Drive

The command to copy files and directories is straightforward.

Copy to the remote directory
rclone copy LOCAL_DIRECTORY_OR_FILE REMOTE_DIRECTORY

Note that you can copy both entire directories or just files within them.

The remote path starts with REMOTE_NAME:. Since in this example we are copying to the aibanezDrive remote, you would write aibanezDrive:, followed by the full path to copy to (starting with a /).
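As a quick sketch of that rule, a full remote path is just the remote name, a colon, and the path. The snippet below only assembles the string (the remote name is the one created in this tutorial); no transfer is performed:

```shell
#!/bin/sh
# Building a remote path as described above. "aibanezDrive" is the
# remote created earlier in this tutorial; this only shows how the
# destination string is put together.
REMOTE="aibanezDrive"
DIR="/Cards"
DEST="${REMOTE}:${DIR}"
echo "$DEST"   # prints aibanezDrive:/Cards
```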

Copy from the remote directory

Just like you can copy files to your remote, you can also copy from the remote to your local computer. Otherwise it would be quite useless!

rclone copy REMOTE_DIRECTORY_OR_FILE LOCAL_DIRECTORY

rclone Copying Example

If you are using the same folder as me, you can copy and paste this command to see how it works. If not, you will have to modify REMOTE_DIRECTORY_OR_FILE and LOCAL_DIRECTORY as needed.

So to copy our Cards directory to /Cards, you would write this:

rclone copy ~/Desktop/Cards aibanezDrive:/Cards

After the operation finishes, you will see a small report on the operation:

Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                 0
Transferred:            9
Elapsed time:       28.9s

(Please disregard the elapsed time for a handful of tiny files. The internet in my country is known to suck, and there’s really no way to get an upload better than 1 Mbps.)

You can now go to your Google Drive account using your web browser. You will see the Cards directory at the root of your account.

Cards

Going inside that directory you will see the other two folders and the None.txt file.

Inside Cards

Finally going inside either the Sakura or Shaoran directories you can see their respective files as well.

Sakura

Now, if you want to copy something from your Google Drive account to your local computer, the steps are just as easy; the only change is that the order of the local and remote paths is switched in the command.

Go ahead and upload any file to your Google Drive account. I uploaded a file called “IMG_1434.JPG” to my root. If I wanted to copy this file from my Drive account to my local computer, inside the ~/Desktop/Cards directory, I’d use this command:

rclone copy aibanezDrive:/IMG_1434.JPG ~/Desktop/Cards

And that’s it! Remember you can use the copy command for both directories and files.

Other Commands

rclone supports a whole lot of commands, as listed in the Documentation. Some useful ones are sync (keep a remote directory in sync with a local one), move, delete (remove the contents of a path), purge (remove a path and all of its contents), ls (list all the objects in the specified path), lsd (list all the directories under the path), mkdir, and rmdir.

There are also a whole lot of interesting flags you can use with your commands, like --bwlimit to limit the amount of bandwidth rclone will use and --transfers to limit the number of files transferred in parallel, among others.
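As a sketch of how these flags combine, here is a hypothetical throttled copy. The limits (512k, 4 transfers) and paths are arbitrary example values, and the command is echoed rather than executed so nothing is transferred if you paste it as-is:

```shell
#!/bin/sh
# Hypothetical throttled backup using the flags described above.
# --bwlimit caps upload bandwidth; --transfers caps parallel files.
SRC="$HOME/Desktop/Cards"
DEST="aibanezDrive:/Backups/Cards"
CMD="rclone copy --bwlimit 512k --transfers 4 $SRC $DEST"
echo "$CMD"   # remove the echo wrapper and run the rclone command directly
```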

Other Notes

It’s important to note that some commands may not work or may behave differently based on what kind of remote you are using. The Storage Systems Overview page has a helpful table and a few notes for how commands may behave. There’s also a page for every supported remote that lists their quirks, features, and possible different behaviors for common commands.

Configuring and Using an Encrypted Remote.

We are finally at the main point of this post. Hooray!

Due to the nature of cloud storage, you may not want to store your files in plain form, because that means they are sitting on someone else’s computer where they might be visible to someone else. If someone breaks into your cloud storage account, they can see your files. That wouldn’t be pretty.

At the same time, some files may be fine to store unencrypted: maybe a list of groceries you need to buy, or other files that you don’t mind other people seeing and feel safe storing as-is.

rclone supports the use of encryption, and it can be used in such a way that a remote can hold both encrypted and non-encrypted files.

The crypt Remote.

To use encryption, you create a crypt remote. This is a special kind of remote. When you configure any other kind of remote, you are creating a direct connection between your computer and a cloud service; that is what makes crypt different. Unlike the other remotes, crypt is placed on top of an existing remote in order to do its job, rather than being a direct connection itself.

The remote we created in this tutorial, aibanezDrive, is a direct remote between your computer and Google Drive. We will now create a crypt remote that uses it as its underlying storage. The crypt remote will encrypt the files and pass them over to your standard remote, and that one will send the encrypted files to the cloud.

Creating a crypt Remote.

Before we create the crypt remote, delete the /Cards folder from your Google Drive (if you created it), as we will be using this same directory to show how crypt works.

With that out of the way, we need to run rclone config again. Press n to create a new remote.

The one thing I don’t like about rclone is that, when listing your remotes, you can’t see which underlying remote a crypt one is using. So when prompted for the name, I recommend giving it one that helps you identify both the crypt remote and the underlying remote it uses. I will be naming mine aibanezDrive_Crypt.

After the name, enter 5 to create a crypt remote.

This is where the configuration takes a different turn from the others. When prompted for a remote, you specify an existing remote along with a path. The path you choose will be the root of the new remote.

Suppose you choose the root of your remote to be /Archive/Encrypted. This will cause the crypt remote to store all the encrypted files there, and when you refer to the root of the remote – say, aibanezDrive_Crypt:/ – you will be referring to the whole /Archive/Encrypted directory in your storage of choice. In other words, your remote won’t be able to see anything outside the path you specified. If you are a UNIX user, you can think of your remote as being chrooted to /Archive/Encrypted.

In this example, I want to archive encrypted files in aibanezDrive:/Archive/Encrypted. Note that we are not referring to the remote we are creating itself, but rather the underlying one.
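The chroot analogy can be sketched as a simple path mapping. The helper below is only an illustration (it is not part of rclone), and note that on the real underlying remote the individual file and directory names will additionally be scrambled when filename encryption is on:

```shell
#!/bin/sh
# Illustrative mapping from a crypt-remote path to the underlying
# remote path, using the names from this tutorial. Not part of rclone;
# real stored names are encrypted when filename encryption is enabled.
CRYPT_ROOT="aibanezDrive:/Archive/Encrypted"

underlying_path() {
    # $1 is a path as seen through aibanezDrive_Crypt:, e.g. "/Sakura"
    echo "${CRYPT_ROOT}$1"
}

underlying_path "/Sakura"   # prints aibanezDrive:/Archive/Encrypted/Sakura
```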

Next you will be asked whether you want to encrypt the file names. The way I see it, yes is the only right choice here, but you might not mind the file names being visible. Choosing yes has some complications, all listed in the crypt section of the documentation; there are issues with long file names and paths. In general, if your file names are below 156 characters you should be fine on all providers, although some providers may not have this issue at all. Please refer to the documentation for details on filename length.

Choose 2.

Next you will be asked to create a password or to have one generated for you. Both options are strong and it depends on how much you trust the RNG of your system.

I will be writing my own.

Next you will be asked for the salt. In cryptography and security, a salt is extra data mixed into a password so its hash comes out entirely different. The salt is what makes the same password, say cat, end up stored as entirely different strings in databases that store passwords.
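To see the effect a salt has, you can hash the same password with and without one. sha256sum here is only for demonstration; rclone uses its own key derivation, not plain SHA-256:

```shell
#!/bin/sh
# Demonstration only: the same password produces completely different
# digests once a salt is appended. This just illustrates the concept
# of salting; it is not how rclone derives its keys.
h1=$(printf 'cat' | sha256sum | cut -d' ' -f1)
h2=$(printf 'catSALT1' | sha256sum | cut -d' ' -f1)
echo "$h1"
echo "$h2"
```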

You have three options this time: provide one, have one generated for you, or not use a salt at all (not recommended). Keep in mind that whether you provide one or have one generated, you will have to store both the password and the salt in a safe place, as rclone will need them when interacting with a crypt remote. Not being able to provide the salt when required will result in you losing access to these files later.

I will be providing my own salt.

After providing the salt you will see the newly created remote:

[aibanezDrive_Crypt]
remote = aibanezDrive:/Archive/Encrypted
filename_encryption = on
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***

Press y at the prompt and then q.

You now have your newly created remote and you can use it exactly the same way you would use any other remote. For this example, I will run exactly the same example I used to demonstrate how copying works with a normal Google Drive remote, only changing the remote name and paths.

rclone copy ~/Desktop/Cards aibanezDrive_Crypt:/

After the command is done, you can verify that the files exist in your Drive account, but you won’t be able to see their contents (or file names) at all! Remember that /Archive/Encrypted is the root of the remote, so your Google Drive will have this same path, and in it you will see the encrypted files.

Crypt Results

Note that, unlike with a normal remote, you cannot just add files to the underlying folder by hand. They will not get magically encrypted; if you upload them directly, you will just be adding normal, unencrypted files. If you want the files and file names encrypted, you can only upload them through your crypt remote with rclone.

And since they are encrypted, the only way to download them is with rclone itself as well. So normally, you would want to list the files available in the remote first. You can do that with the ls and lsd commands:

Andys-iMac:Cards andyibanez$ rclone ls aibanezDrive_Crypt:/
   358768 IMG_1434.JPG
        0 None.txt
        0 Shaoran/The Freeze
        0 Shaoran/The Return
        0 Shaoran/The Storm
        0 Shaoran/The Time
        0 Sakura/The Sword
        0 Sakura/The Mirror
        0 Sakura/The Return
        0 Sakura/The Fly

Note that rclone tries to list everything when just using the ls command. You can list directories only using lsd (this would not print IMG_1434.JPG and None.txt in this case), or, if you just want to view the files (not directories) at the top level, you can run the ls command with the --max-depth=1 flag:

Andys-iMac:Cards andyibanez$ rclone ls --max-depth=1 aibanezDrive_Crypt:/
   358768 IMG_1434.JPG
        0 None.txt

Now suppose I want to download the Sakura folder to my computer:

rclone copy aibanezDrive_Crypt:/Sakura ~/Desktop/restored

This will download the contents from Sakura into ~/Desktop/restored.
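From here, a recurring encrypted backup is just a sync to the crypt remote. A sketch follows; note that unlike copy, sync also deletes remote files that no longer exist locally. The paths are hypothetical, and the command is echoed so nothing runs by accident:

```shell
#!/bin/sh
# Hypothetical recurring encrypted backup. sync makes the destination
# mirror the source, deleting remote files that are missing locally.
SRC="$HOME/Documents"
DEST="aibanezDrive_Crypt:/Backups/Documents"
CMD="rclone sync $SRC $DEST"
echo "$CMD"   # remove the echo to perform the real sync, e.g. from cron
```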

Conclusion

rclone is a fantastic tool for creating backups and for using cloud storage. If you want to use something like rsync but would rather store your files someplace else, rclone is the perfect tool. It supports many cloud providers, it’s open source, it’s under active development, and it supports encryption out of the box.

Installing Ghost In an Ubuntu Server with Virtualmin and Apache
https://www.andyibanez.com/ghost-ubuntu-virtualmin-apache/
Wed, 20 Aug 2014
Learn to install Ghost in a VPS you manage with Virtualmin and Apache.

The post Installing Ghost In an Ubuntu Server with Virtualmin and Apache appeared first on Andy Ibanez.

Ghost

I was arguing with myself over whether I should post this or not. For one, it is the second Linux-related tutorial I’ve written, and that strays from the main topic of my site (my awesome self and iOS development). But because installing Ghost was a big hassle on my current setup, I decided to write this for those who are struggling with the same thing as me. After all, I have said this before: one of the reasons I decided to start blogging was so I could start documenting things for myself. Most, if not all, of my tutorials have been written shortly after I learned the topic, so I know my own site is a good go-to reference when I need those topics again.

At the time of this writing, Ghost 0.5 has just been released to the public. There are many changes, and there aren’t too many sources you can find to help you install Ghost on Ubuntu, using Virtualmin (Webmin, in case you only have access to that – this tutorial should work for both). In fact, in all my searches, the total amount of resources I found that talked about Ghost and Virtualmin or Webmin in the same page was a grand total of zero.

For this reason, I will document how I managed to install Ghost on my server, which is running Ubuntu server with a Virtualmin setup. If you are like me, you just hate to do server administration by hand using the Terminal and sending off the commands, and installed Virtualmin to make your life easier.

Most Ghost related tutorials use NGinx. But personally I am an Apache guy, so for me it’s important to make this work in Apache.

In short, this tutorial assumes that you have access to Virtualmin (or Webmin), and are using Apache.

Finally, this is not a tutorial for beginners. Even when using Webmin, you need to know the command line, and know your way around virtual sites.

With all that said, I will try to make this as easy as possible.

1. Preparing Your Web Server For Ghost.

The first thing you will probably want to do is to set up a subdomain using Virtualmin for your Ghost Blog. If you are using a shared host that has Webmin and can’t do that, don’t worry, you can set Ghost up in your main domain as well.

(Virtualmin only) Go to Virtualmin, click Create Virtual Server. At this point, the settings of the server are irrelevant so just configure it as you need it. The Ghost blog I created is located at http://linguist.andyibanez.com

Grab your username and password for FTP and SSH purposes.

2. Installing The Packages Required for Ghost

After you have your site created, go to Webmin > System > Software Packages.

2.1 Build Essentials and NPM

Select the radio button that says “Package from APT”. Then click the button that says “Search APT…”. Type in build-essential, select the only package that shows up, and install it. This package is needed because it includes a few useful development tools, including g++, which is needed to compile the other components Ghost depends on.

APT

Do the exact same thing with NPM, writing “npm” instead of build-essential.

2.2 node.js

Sadly, I lied. Sorta. We will need to access our site via SSH after all to install the remaining component for Ghost. I, too, thought it was a bummer when I realised I had to do that.

Technically, you can install node.js via APT after installing build-essential and NPM, but at the time of this writing, the APT packages for node.js are outdated. Ghost needs node.js 0.10.x to work, and at the time of this writing APT installs 0.6.x. So we have no other option but to download node.js and compile it from source.

Virtualmin Users

If you have access to Virtualmin because you have full access over your VPS, follow these instructions.

1. Login to your root account via SSH in the terminal (this is NOT the username and password you created above – you should have gotten this when you created your VPS).
2. Once logged in, type in the following commands *:

cd /usr/src
wget http://nodejs.org/dist/v0.10.18/node-v0.10.18.tar.gz
tar zxf node-v0.10.18.tar.gz
cd node-v0.10.18
./configure
make
make install

The moment you issue the make command, your server will start a big compilation job. It can take a while to complete (in my case, it took 8 to 10 minutes).

Once that is done, verify node.js is installed by typing this command:

node -v

It should print the node.js version you’re running.

If you are using shared hosting and don’t have access to Virtualmin but only to Webmin, you may need to ask the manager of the server to install that version of node.js, npm, and build-essential.

3. Installing Ghost

Once all that is done, all we have left is to install Ghost itself. Go to http://ghost.org/download and download the package to your computer. Next, grab the SSH details we obtained in the first step.

Once Ghost is downloaded, open your favorite FTP program and log in with the credentials. Navigate to the public_html folder, unzip the Ghost package, and upload it (you can also upload the zip as-is, or even download it directly on the server with wget and unzip it with a few more command line commands).
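If you prefer the wget route mentioned above, the sketch below builds the commands. The download URL is an assumption you should verify against ghost.org/download, and the commands are echoed rather than run:

```shell
#!/bin/sh
# Hypothetical server-side download of Ghost. The URL is assumed;
# check ghost.org/download for the current one. Echoed only.
ZIP_URL="https://ghost.org/zip/ghost-latest.zip"
echo "wget $ZIP_URL"
echo "unzip ghost-latest.zip -d ~/public_html"
```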

Once that is uploaded via FTP, we need to log in via SSH again, but this time using the credentials from the first step instead of root. Do that and navigate to the place where you just uploaded the Ghost files. Once you are there, run the following command to install Ghost:

npm install --production

Watch out for any possible errors. If we did everything correctly, the most you will see are a few warnings.

4. Configuring Ghost

We are nearing the end of this process.

Ghost has a file called “config.js”. If you don’t see it, you may see the example “config.example.js” file instead; you can just copy the latter and rename the copy to config.js. Open it with your favorite editor (I do all my FTP operations with Cyberduck, so I can download the files locally and open them with TextMate, but this is just personal preference).

The full config file follows; you can also just steal this one.

// # Ghost Configuration
// Setup your Ghost install for various environments
// Documentation can be found at http://support.ghost.org/config/

var path = require('path'),
    config;

config = {
    // ### Development **(default)**
    development: {
        // The url to use when providing links to the site, E.g. in RSS and email.
        url: 'http://my-ghost-blog.com',

        // Example mail config
        // Visit http://support.ghost.org/mail for instructions
        // ```
        //  mail: {
        //      transport: 'SMTP',
        //      options: {
        //          service: 'Mailgun',
        //          auth: {
        //              user: '', // mailgun username
        //              pass: ''  // mailgun password
        //          }
        //      }
        //  },
        // ```

        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost-dev.db')
            },
            debug: false
        },
        server: {
            // Host to be passed to node's `net.Server#listen()`
            host: '127.0.0.1',
            // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
            port: '2368'
        },
        paths: {
            contentPath: path.join(__dirname, '/content/')
        }
    },

    // ### Production
    // When running Ghost in the wild, use the production environment
    // Configure your URL and mail settings here
    production: {
        url: 'http://my-ghost-blog.com',
        mail: {},
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost.db')
            },
            debug: false
        },
        server: {
            // Host to be passed to node's `net.Server#listen()`
            host: '127.0.0.1',
            // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
            port: '2368'
        }
    },

    // **Developers only need to edit below here**

    // ### Testing
    // Used when developing Ghost to run tests and check the health of Ghost
    // Uses a different port number
    testing: {
        url: 'http://127.0.0.1:2369',
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost-test.db')
            }
        },
        server: {
            host: '127.0.0.1',
            port: '2369'
        },
        logging: false
    },

    // ### Testing MySQL
    // Used by Travis - Automated testing run through GitHub
    'testing-mysql': {
        url: 'http://127.0.0.1:2369',
        database: {
            client: 'mysql',
            connection: {
                host     : '127.0.0.1',
                user     : 'root',
                password : '',
                database : 'ghost_testing',
                charset  : 'utf8'
            }
        },
        server: {
            host: '127.0.0.1',
            port: '2369'
        },
        logging: false
    },

    // ### Testing pg
    // Used by Travis - Automated testing run through GitHub
    'testing-pg': {
        url: 'http://127.0.0.1:2369',
        database: {
            client: 'pg',
            connection: {
                host     : '127.0.0.1',
                user     : 'postgres',
                password : '',
                database : 'ghost_testing',
                charset  : 'utf8'
            }
        },
        server: {
            host: '127.0.0.1',
            port: '2369'
        },
        logging: false
    }
};

// Export config
module.exports = config;

Search for the line that says “// ### Production” (near line 47) and make a few edits.

  • Edit url to be the URL that will be used to visit your blog.
  • (Optionally) change the port number (only do this if you’re installing a second instance of Ghost).

Finally, go back to Webmin, select your server, and go to Configure Website > Edit Directives. You will be greeted with a bunch of directives.

Paste in these three lines, right before the first “Directory” directive:

ProxyRequests off
ProxyPass / http://127.0.0.1:2368/
ProxyPassReverse / http://127.0.0.1:2368/

If you are installing a second instance of Ghost, you would need to change the port number to match the port of the instance.
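One step that can trip people up: ProxyPass and friends only work if Apache’s proxy modules are enabled, which is not always the case on a fresh Ubuntu install. A sketch of the commands to enable them (run as root; echoed here so the snippet is safe to paste):

```shell
#!/bin/sh
# Enable Apache's proxy modules on Ubuntu/Debian, needed for the
# ProxyPass directives above. Echoed only; remove the echo to run.
ENABLE_CMD="a2enmod proxy proxy_http && service apache2 restart"
echo "$ENABLE_CMD"
```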

Now you need to start the service (Ghost runs as a service instead of an actual website).

npm start --production > output.log &

Please note you will need to do this again if you restart your server. There are ways to keep Ghost “alive forever”, but none of them have worked for me, so I am omitting them from this tutorial.

That’s a Wrap

You can now visit the URL of your Ghost website. Visit its /ghost page to create the first user and start blogging!

You can install multiple instances of Ghost with this method. Simply change the port number in both the configuration and the proxy settings when you do.

Enjoy!

Making A Linux File Server That Interacts With OS X Mavericks.
https://www.andyibanez.com/making-linux-file-server-interacts-os-x-mavericks/
Mon, 28 Oct 2013
Learn to build a file server that interacts with OS X Mavericks. Mavericks will talk to the file server via SMB.

The post Making A Linux File Server That Interacts With OS X Mavericks. appeared first on Andy Ibanez.

Making A Linux File Server That Interacts With OS X Mavericks.

This is a non-development tutorial, but with Mavericks being really new, there are no tutorials on making your file server interact with the OS smoothly. Because of this, I’m writing this small tutorial on setting up your server to share files with OS X Mavericks. This may or may not work with OS X Lion and OS X Mountain Lion, but you lose nothing by trying, right?

Last weekend I decided to ditch my Windows desktop altogether and make my beautiful 27-inch iMac my main computer. The reason I stuck with Windows for so long is that I just cannot stash thousands of gigabytes in my iMac. In other words, storage was the only reason I was using Windows. But then I thought, wouldn’t it be great to kill my Windows setup altogether and convert all this hardware into a file server? I said yes, and that’s exactly what I did.

I’m incredibly illiterate when it comes to Linux and file servers; this was actually my first time setting up a file server. I considered FreeNAS and looked at many other OS alternatives, but I decided to stick with Ubuntu. If you’re in a similar situation, follow along with this tutorial and hopefully by the end of it you will have a working Ubuntu file server that can communicate painlessly with your Mac.

1. Things to Consider: Your File Server Won’t use AFP.

If you don’t have much of a technical background and just want to have your file server up and running, feel free to skip this paragraph.

It will use SMB (yes, Windows’; ironic in my case, isn’t it?). Why? Apple has the nasty habit of quietly killing many of their technologies with no formal announcement, but this one was spotted in some technical documents: Apple is shifting from AFP file sharing to SMB2 in OS X 10.9 Mavericks. It may come as a surprise to some, but many people have had problems with AFP in Mountain Lion and below. Apple is dropping support for their own protocol in favor of Windows’ because it is more secure (or so they say).

There were many problems when I was first setting my file server up. I actually had no idea what the AFP protocol was, much less that it was being deprecated. When I first set the file server up, I used this tutorial to make Ubuntu announce itself and interact with OS X via AFP. I had no problems following that tutorial, but at testing time the problems were obvious. I was able to connect to my file server exactly once; then I restarted the server, and OS X Mavericks refused to connect anymore, burping this error message instead:

“The version of the server you’re trying to connect to is not supported. Please contact your system administrator to solve this problem.”

Despite Googling tirelessly for hours, I never found a way to get that setup to work. I learned AFP was being dropped, and while a few workarounds were suggested, none of them worked for me. So I just started over with a fresh Ubuntu installation, and luckily, I succeeded in putting my file server up.

Requirements for the File Server

I will not talk about the hardware requirements here. Personally I transformed an old computer into a file server. I assume you already have some old hardware lying around that you can use. But in case you’re curious, this is my server’s setup:

  • 5 internal 1 TB hard drives
  • 12 GB of RAM
  • A 1680 x 1050 VGA monitor

You can probably tell I won’t guide you through installing SSH to control the server remotely; I control the server physically, since it sits right next to my main computer anyway.

As for the software requirements:

  • Ubuntu Server. It probably works with the “normal” Ubuntu Desktop too, but Server is what I use. At the time of this writing, the latest version is Ubuntu Server 13.10, and you can get it here.
  • Samba. We need this so our server can talk SMB. I will guide you through installing it.
  • Avahi (Optional). This isn’t strictly needed, but it does something really cool – it announces the server via Bonjour. If you want to install it, I will guide you through that too.

1.1 So no Netatalk, then?

If you’re one of the poor souls who couldn’t get an Ubuntu file server to work properly with OS X Mavericks or (Mountain) Lion, you’re probably asking yourself why we aren’t using Netatalk.

Like I said at the beginning of this tutorial, we will be using SMB instead of AFP. Netatalk provides the tools required to talk to your server via AFP. Since Apple will most likely get rid of AFP completely at some point, we don’t need it at all.

2. Preparing the File Server

Installing and Configuring Ubuntu Server

Installing the server from scratch is out of the scope of this tutorial. I will focus mostly on Samba, to make sure your Ubuntu server can actually talk to your Mac. So go ahead and install it, as barebones as you can/want. I didn’t do anything different from clicking “next” until the installation started.

Just make sure the server uses a different computer name and account name than your Mac. There seems to be some sort of conflict when your OS X account has the same name as the file server’s account.

Additionally, you will need to give Ubuntu a static IP address or use a .local name. Personally, I gave the Ubuntu machine a static IP; it’s much easier to work with on my home network.
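On Ubuntu Server 13.10 the static IP is configured in /etc/network/interfaces. Here is a minimal sketch – the interface name (eth0) and all the addresses below are examples, so adjust them to match your own network:

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.50       # the server's fixed address (example)
    netmask 255.255.255.0
    gateway 192.168.1.1        # your router (example)
    dns-nameservers 192.168.1.1
```

After editing the file, restart networking (or simply reboot) for the change to take effect.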

Installing and Configuring Samba: A Very Important Piece of the File Server

This is the longest and most monotonous part, but it isn’t so bad. Make sure you’re ready to spend 15 minutes of your time. To get this right I actually had to follow two different tutorials, which I have compiled into one for you:

Step 1: Open Up The Terminal

I hate silly steps like this, but let’s put them in for the sake of completeness. Open up a terminal window. We’re going to do everything as root to save ourselves the constant “sudo”. Type in:

sudo su

We’re going to be editing a few files later on. I like to use the graphical gedit editor for this; others may prefer Nano, Vi, Vim, etc. If you’re like me and prefer a nice GUI, please install gksu. This may seem irrelevant to the tutorial, but you shouldn’t launch GUI programs with plain sudo, even when you do need superuser permissions. Instead, launch them with gksu (or gksudo). To install it:

apt-get install gksu

To use it when you aren’t using the command line as root:

gksu gedit somefile

Or:

gksudo gedit somefile

When you want to share new directories via Samba later on, you will need to edit a file, and if you like gedit, you had better use gksu to launch it instead of sudo from now on. So maybe it is a little relevant after all!

Step 2: Install and Configure Samba for the File Server

With your shiny super user permissions, it’s time to install Samba:

apt-get install libcups2 samba samba-common

Once it’s done installing, we have to do a couple of configurations. We need to edit

/etc/samba/smb.conf

now:

gedit /etc/samba/smb.conf

This will launch gedit with Samba’s configuration file. Press Ctrl + F and type in “security =” so you’re taken to this area:

[...]
# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
#  security = user
[...]

Uncomment the last line (remove the pound sign), and add

username map = /etc/samba/smbusers

below it, so you have:

[...]
# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
  security = user
  username map = /etc/samba/smbusers
[...]

Save (Ctrl + S) the file and quit gedit so you have control over the terminal window with su.

Restart Samba when done:

service smbd restart

Step 3: Creating Samba Users

Samba users are needed to interact with the file server.

The steps to create a Samba user are as simple as 1, 2, and I will guide you through them:

  1. Create a password for the user you want to create.
  2. Add that user to the Samba users file.

To create the password for the Samba user:

smbpasswd -a <username>

Where username is the name of the new Samba account you want to create. For example:

smbpasswd -a Andy

Then open the

/etc/samba/smbusers

file:

gedit /etc/samba/smbusers

And add the username like this:

<ubuntu_username> = "<samba_username>"

To the left you put the name of the Ubuntu account that will have access to this Samba account, and to the right, the name of the Samba account it has access to.

So if my Ubuntu account is called “andyfileserver” and the Samba account is called Andy, it would be like this:

andyfileserver = "Andy"

Save the file and restart Samba once again.

service smbd restart

And that’s it! You can now connect to your file server from OS X Mavericks by going to Finder > Go > Connect to Server…
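In the Connect to Server dialog, type the server’s address with the smb:// scheme – the IP below is just an example, so use your server’s static IP (or its .local name) instead:

```
smb://192.168.1.50
```

You’ll then be prompted for the Samba username and password you created in Step 3.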

Sharing Folders

This is really beyond the scope of this tutorial. There are many possible configurations for shared folders, but to get you started, you add some entries to your

/etc/samba/smb.conf

file. Open it and add your entries at the end of the file, like this:

[theroot]

comment = File Server
path = /media/andy
read only = no
writable = yes

[bluray]

comment = Optical Drive
path = /cdrom
read only = yes
writable = no

You are essentially sharing directories. The name between the [] brackets is the share name your Mac will see on the file server. I have “theroot” and “bluray”, so for me it looks like this:

[Screenshot: the “theroot” and “bluray” shares as seen from the Mac]

Again, there are many options, so you may want to look into how to configure Samba shared folders for your needs; you can even assign different permissions per share and per user.
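As a sketch of what those options look like, here is a hypothetical share restricted to a single Samba user – the share name, path, and user below are made up for illustration:

```
[private]

comment = Andy's Private Files
path = /media/andy/private
valid users = Andy
read only = no
writable = yes
# permissions given to files and folders created over SMB
create mask = 0644
directory mask = 0755
```

Remember to restart Samba (service smbd restart) after editing smb.conf so the new share shows up.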

3. (Optional) File Server Announcing to the Network

You have finished the most important part, and you can use your file server now, but this little extra step is cool in my opinion. Instead of having to go to Finder > Go > Connect to Server… every time we want to connect, why not have the server show up in Finder’s sidebar?

[Screenshot: the file server appearing in Finder’s sidebar]

To do that we need to install Avahi. Avahi is a daemon that implements Bonjour-style (mDNS/DNS-SD) service discovery: it announces the server to the network when it comes online. Instead of telling OS X Mavericks to go find the server, the server tells everyone it’s there. That way it appears as a shared device in the sidebar of any Finder window, which is a much easier and more pleasant way to interact with it.

Luckily, installing and configuring Avahi is really fast and simple.

If you still have your su terminal window, type this in:

apt-get install avahi-daemon avahi-utils

Then open and edit the file

/etc/avahi/services/smb.service

(the file will be blank and that’s fine – you’re creating it from scratch):

gedit /etc/avahi/services/smb.service

And copy and paste this into it:

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
 <name replace-wildcards="yes">%h</name>
 <service>
   <type>_smb._tcp</type>
   <port>445</port>
 </service>
 <service>
   <type>_device-info._tcp</type>
   <port>0</port>
   <txt-record>model=RackMac</txt-record>
 </service>
</service-group>

And that’s it! You don’t even need to restart Avahi. It will automatically announce the server to the network, and your Mac will see your file server in Finder’s sidebar.

That’s all! I hope you find this tutorial useful, as I personally had a hard time getting my file server to talk properly with OS X Mavericks.

Linklist:

These are the original tutorials I used:

http://www.howtoforge.com/ubuntu-13.04-samba-standalone-server-with-tdbsam-backend
http://www.howtogeek.com/howto/ubuntu/install-samba-server-on-ubuntu/
http://www.macdweller.org/2012/05/13/samba-bonjour-with-avahi/

The post Making A Linux File Server That Interacts With OS X Mavericks. appeared first on Andy Ibanez.

]]>