Merge pull request #65 from rtanaka/rclone_archive

add support for cloud based syncing (e.g. Google Drive) via rclone
This commit is contained in:
cimryan
2018-10-26 18:05:11 -07:00
committed by GitHub
9 changed files with 140 additions and 1 deletions


@@ -62,6 +62,9 @@ Get the IP address of the archive machine. You'll need this later, so write it d
Since sftp/rsync accesses the archive machine over SSH, the only requirement for hosting an SFTP/rsync server is a box running Linux. For example, this could be another Raspberry Pi connected to your local network with a USB storage drive plugged in. The official Raspberry Pi site has a good example of [how to mount an external drive](https://www.raspberrypi.org/documentation/configuration/external-storage.md). You will need the username and host/IP of the storage server, as well as the path the files should go in, and the storage server will need to allow SSH.
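As a sketch, preparing such a storage box mostly amounts to creating the target folder and making sure SSH is reachable. The mount point and folder name below are placeholders, not part of this project; adjust them to wherever your drive is mounted:

```shell
# Placeholder paths: adjust ARCHIVE_ROOT to your own mount point.
ARCHIVE_ROOT="${ARCHIVE_ROOT:-$HOME/usbdrive}"   # where the USB drive is mounted
mkdir -p "$ARCHIVE_ROOT/TeslaCam"                # folder the footage will land in
ls -ld "$ARCHIVE_ROOT/TeslaCam"                  # confirm it exists and is writable
```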
### Hosting via rclone (for Google Drive, S3, Dropbox, etc.)
**EXPERIMENTAL** - This hasn't been thoroughly tested yet but you can follow these [instructions](/doc/SetupRClone.md) and give it a spin.
### ***TODO: Other hosting solutions***
## Set up the Raspberry Pi

doc/SetupRClone.md Normal file

@@ -0,0 +1,85 @@
# Introduction
This guide will show you how to install and configure [rclone4pi](https://github.com/pageauc/rclone4pi/wiki) (based on [rclone](https://rclone.org/)) to archive your saved TeslaCam footage to a number of different remote storage services, including Google Drive, S3, and Dropbox.
This guide assumes you have **NOT** run the `setup-teslacam` script yet.
# Step 1: Install rclone4pi
The first step is to get the [rclone4pi](https://github.com/pageauc/rclone4pi/wiki) binary installed on the Raspberry Pi. You can do this by executing the following command:
```
curl -L https://raw.github.com/pageauc/rclone4pi/master/rclone-install.sh | bash
```
You can also install the script manually by following these [instructions](https://github.com/pageauc/rclone4pi/wiki#manual-install).
Once finished, the installer places rclone-install.sh and rclone-sync.sh on the system and creates an rpi-sync subfolder in the user's home directory (e.g. /home/pi).
# Step 2: Configure rclone storage system
Next, run this command as root to configure a storage system.
```
rclone config
```
**Important:** Run this as root, since archiveloop runs as root and the rclone config is tied to the user that ran `rclone config`.
This will launch an interactive setup with a series of questions. I highly recommend looking at the documentation for your storage system by going to [rclone](https://rclone.org/) and selecting it from the drop-down menu at the top.
I've only personally tested this with Google Drive, using these [instructions](https://rclone.org/drive/). One thing to note is the importance of setting the correct [scope](https://rclone.org/drive/#scopes) for the access you are granting. Carefully read the documentation on [scopes on rclone](https://rclone.org/drive/#scopes) as well as on [Google Drive](https://developers.google.com/drive/api/v3/about-auth). I recommend going with the `drive.file` scope.
**Important:** Take note of the name you used for this config. You will need it later. The rest of the document will use `gdrive` as the name since that's what I used.
# Step 3: Verify and create storage directory
Run the following command (again, as root) to see the name of the remote drive you just created.
```
rclone listremotes
```
If you don't see the name there, something went wrong and you'll likely have to go back through the config process. If all went well, use
```
rclone lsd gdrive:
```
If you set a restricted scope (such as `drive.file`), this listing should be empty, since rclone can only see files it created itself. Now we need to create a folder to put all our archives in. You can do this by running the command below. I used `TeslaCam`, but you can name it whatever you want, as long as you set the same name in the Exports step below.
```
rclone mkdir gdrive:TeslaCam
```
Run the listing command one more time:
```
rclone lsd gdrive:
```
Once you confirm that the directory you just created is there, we're all set to move on!
# Step 4: Exports
To configure the teslausb Pi to use rclone, you'll need to export a few environment variables. On your teslausb Pi, run:
```
export RCLONE_ENABLE=true
export RCLONE_DRIVE=<name of drive>
export RCLONE_PATH=<path to folder>
```
An example of my config is listed below:
```
export RCLONE_ENABLE=true
export RCLONE_DRIVE=gdrive
export RCLONE_PATH=TeslaCam
```
**Note:** Setting `RCLONE_ENABLE=true` disables the default archive server. It also will **not** play nicely with `RSYNC_ENABLE=true`. Perhaps future releases will allow both to be enabled at the same time, for redundancy, but for now pick the one you want most.
You should be ready to run the setup script now, so return to step 8 of the [Main Instructions](/README.md).
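Note that exported variables only live for the current shell session; if you reboot or open a new terminal before running the setup script, export them again. A quick sanity check that all three are set, using the example values above (`${var:?message}` makes the shell exit with an error if the variable is unset or empty):

```shell
export RCLONE_ENABLE=true
export RCLONE_DRIVE=gdrive     # example remote name from this guide
export RCLONE_PATH=TeslaCam    # example folder name from this guide
# Each line fails loudly if the corresponding variable is missing.
echo "RCLONE_ENABLE=${RCLONE_ENABLE:?not set}"
echo "RCLONE_DRIVE=${RCLONE_DRIVE:?not set}"
echo "RCLONE_PATH=${RCLONE_PATH:?not set}"
```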


@@ -0,0 +1,20 @@
#!/bin/bash -eu
log "Moving clips to rclone archive..."
source /root/.teslaCamRcloneConfig
NUM_FILES_MOVED=0
for file_name in "$CAM_MOUNT"/TeslaCam/saved*; do
[ -e "$file_name" ] || continue
log "Moving $file_name ..."
rclone --config /root/.config/rclone/rclone.conf move "$file_name" "$drive:$path" >> "$LOG_FILE" 2>&1 || echo ""
log "Moved $file_name."
NUM_FILES_MOVED=$((NUM_FILES_MOVED + 1))
done
log "Moved $NUM_FILES_MOVED file(s)."
/root/bin/send-pushover "$NUM_FILES_MOVED"
log "Finished moving clips to rclone archive"
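The `[ -e "$file_name" ] || continue` guard above matters because, when nothing matches, a bash glob like `saved*` is left as the literal pattern string rather than expanding to nothing. A local sketch of the same loop shape, using temp directories and `mv` in place of rclone (all names here are stand-ins for illustration only):

```shell
# Stand-ins for CAM_MOUNT and the rclone destination.
CAM_MOUNT="$(mktemp -d)"
DEST="$(mktemp -d)"
mkdir -p "$CAM_MOUNT/TeslaCam"
touch "$CAM_MOUNT/TeslaCam/saved-clip1.mp4"
moved=0
for f in "$CAM_MOUNT"/TeslaCam/saved*; do
  [ -e "$f" ] || continue      # skip the literal pattern when nothing matched
  mv "$f" "$DEST"/
  moved=$((moved + 1))
done
echo "moved $moved file(s)"
```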


@@ -0,0 +1,10 @@
#!/bin/bash -eu
function configure_archive () {
echo "Configuring the archive for Rclone..."
echo "drive=$RCLONE_DRIVE" > /root/.teslaCamRcloneConfig
echo "path=$RCLONE_PATH" >> /root/.teslaCamRcloneConfig
}
configure_archive
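For reference, with the example values from doc/SetupRClone.md, the generated file holds two `key=value` lines that archive-clips.sh later sources to get `$drive` and `$path`. A runnable sketch of that round trip, written to a temp file instead of /root so no root access is needed:

```shell
cfg="$(mktemp)"                  # stands in for /root/.teslaCamRcloneConfig
echo "drive=gdrive" > "$cfg"     # example values from the setup guide
echo "path=TeslaCam" >> "$cfg"
. "$cfg"                         # archive-clips.sh sources this file the same way
echo "$drive:$path"              # the rclone destination: gdrive:TeslaCam
```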


@@ -0,0 +1,2 @@
#!/bin/bash -eu
# Nothing to do.


@@ -0,0 +1,2 @@
#!/bin/bash -eu
# Nothing to do.


@@ -0,0 +1 @@
#!/bin/bash -eu


@@ -7,7 +7,7 @@ function log () {
echo "$1" >> "$LOG_FILE"
}
-if [ -r "/root/.teslaCamPushoverCredentials" ] && [ $NUM_FILES_MOVED > 0 ]
+if [ -r "/root/.teslaCamPushoverCredentials" ] && [ $NUM_FILES_MOVED -gt 0 ]
then
log "Sending Pushover message for moved files."


@@ -118,6 +118,11 @@ function configure_archive_scripts () {
get_script /root/bin archive-clips.sh run/rsync_archive
get_script /root/bin connect-archive.sh run/rsync_archive
get_script /root/bin disconnect-archive.sh run/rsync_archive
elif [ "$RCLONE_ENABLE" = true ]
then
get_script /root/bin archive-clips.sh run/rclone_archive
get_script /root/bin connect-archive.sh run/rclone_archive
get_script /root/bin disconnect-archive.sh run/rclone_archive
else
get_script /root/bin archive-clips.sh run/cifs_archive
get_script /root/bin connect-archive.sh run/cifs_archive
@@ -180,6 +185,7 @@ function make_root_fs_readonly () {
echo "Verifying environment variables..."
RSYNC_ENABLE="${RSYNC_ENABLE:-false}"
RCLONE_ENABLE="${RCLONE_ENABLE:-false}"
if [ "$RSYNC_ENABLE" = true ]
then
@@ -187,6 +193,12 @@ then
check_variable "RSYNC_SERVER"
export archiveserver="$RSYNC_SERVER"
check_variable "RSYNC_PATH"
elif [ "$RCLONE_ENABLE" = true ]
then
check_variable "RCLONE_DRIVE"
check_variable "RCLONE_PATH"
# since it's a cloud hosted drive we'll just set this to Google DNS
export archiveserver="8.8.8.8"
else # Else for now, TODO allow both for more redundancy?
check_variable "sharename"
check_variable "shareuser"
@@ -209,6 +221,10 @@ if [ "$RSYNC_ENABLE" = true ]
then
get_script /root/bin verify-archive-configuration.sh run/rsync_archive
get_script /root/bin configure-archive.sh run/rsync_archive
elif [ "$RCLONE_ENABLE" = true ]
then
get_script /root/bin verify-archive-configuration.sh run/rclone_archive
get_script /root/bin configure-archive.sh run/rclone_archive
else
get_script /root/bin verify-archive-configuration.sh run/cifs_archive
get_script /root/bin configure-archive.sh run/cifs_archive