If you work with video on a regular basis, you are no doubt used to sending large media files to clients/teammates and backing up your media files to a storage unit of some kind thereafter. Chances are when you transfer and archive your media files, it’s a two-step process. Today, I want to demonstrate how you can automatically transfer and archive media files in a single process using a combination of MASV Portals and Backblaze as an example.

The main benefit of combining both of these actions, aside from speed and efficiency, is making sure no file is ever forgotten during the backup process. Everyone at some point has opened a folder to find files named file01.mp4, file02.mp4, and file01changed.mp4: a quick backup process that got complicated incredibly quickly. An automatic backup process, on the other hand, can separate each file by upload date and file size, saving you time and effort in figuring out which version of a file is the important one.

Today, integrating MASV with cloud providers like Backblaze can be achieved on our web app with a few clicks. However, this post will take things a bit further by showing you how to implement these steps using MASV’s Cloud Connect API. This allows you to automate the process beyond a single cloud connection and create an automation powerhouse that connects multiple MASV Portals to multiple cloud providers and storage buckets. We’ll be using Python 3 to demonstrate how easily this can be achieved.

Transfer and Archive Media with MASV + Backblaze

As mentioned, we are going to use MASV and Backblaze today. Our goal is to create a simple Python 3 script that connects both services and shows how easily MASV’s Cloud Connect technology works. Both technologies offer a number of features; here are some highlights of what we are using today.


MASV:

  • Ultra fast large file transfer.

  • Automatic transfer of entire directories.

  • Cloud Connect: integration with external cloud buckets to automatically move files where you need them.

Backblaze:

  • Cheap, unlimited long-term storage.

  • Incredibly easy cloud backup.

  • Workflow friendly.

  • Supports two-factor authentication.
We are utilizing Backblaze as our Cloud Connect provider for integration with MASV. All of these steps can be generalized to work with any provider as outlined in our documentation.

Workflow Assumptions

Moving forward we are going to assume the following:

  • A code editor such as Visual Studio Code.

  • Python 3 installed and the ability to run scripts.

  • A MASV account.

  • Backblaze B2 account with API key.

Before we get started, make sure you have a Backblaze B2 application key. If you don’t, please follow the guide at https://www.backblaze.com/b2/docs/application_keys.html

Setting Up Requirements

Start by creating a file called backblaze.py. This is the script that all of our code will be written in.

The script needs to send HTTP requests, and to do so we are going to use the requests library.

In your terminal, install the requests package.

$ pip3 install requests

Authenticating with MASV

For our authentication we are going to use our username and password. This can also be done with an API key, and in practice we highly recommend it: an API key can easily be rotated if it ever leaks, while user credentials grant access to your full account dashboard. For simplicity’s sake, though, we are using a username and password today.
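As an aside, if you would rather keep credentials out of the script itself, here is a minimal sketch that builds the same payload from environment variables. MASV_EMAIL and MASV_PASSWORD are hypothetical variable names of our own choosing, not part of the MASV API:

```python
import os

# Build the /auth payload from environment variables so credentials
# never live in source control. MASV_EMAIL and MASV_PASSWORD are
# hypothetical names; the defaults mirror the placeholders below.
def build_auth_request():
    return {
        "email": os.environ.get("MASV_EMAIL", "[email protected]"),
        "password": os.environ.get("MASV_PASSWORD", "yourpassword"),
    }

auth_request = build_auth_request()
```

You would export the two variables in your shell before running the script, and the rest of the code stays unchanged.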

import requests

api_uri = 'https://api.massive.app/v1'
api_auth_route = '/auth'

auth_request = {
    "email": "[email protected]",
    "password": "yourpassword"
}

r = requests.post(
    api_uri + api_auth_route,
    json=auth_request
)

auth_response = r.json()
auth_token = auth_response['token']
team_id = auth_response['teams'][0]['id']

print("Grabbed authentication token")

There’s a bit of boilerplate here to get us going. Line #1 imports the requests library, while line #3 defines the API URI. Keeping the base URI in its own variable is important, as it allows us to easily change it in case of any future domain changes. It is followed by the exact route for authentication.

We fill in our username and password for our auth request; these are the credentials you use to access the MASV dashboard. Then we actually make the request: with string concatenation we combine api_uri and api_auth_route into the full URL https://api.massive.app/v1/auth and attach our auth_request variable as JSON.

We are feeling good today, so we simply assume the request went through and start grabbing variables (for an outline of all the variables returned, please visit our documentation).

The variables we need are:

  • auth_token: used to authenticate our requests

  • team_id: the team that we attach our requests and cloud connection
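If you would rather not assume success, here is a defensive sketch of pulling out those two fields. The response shape matches the fields used above; anything unexpected raises instead of failing later with a KeyError:

```python
# Pull the token and first team id out of the /auth response,
# failing loudly if either field is missing.
def parse_auth_response(auth_response):
    token = auth_response.get("token")
    teams = auth_response.get("teams") or []
    if not token or not teams:
        raise RuntimeError("Unexpected /auth response")
    return token, teams[0]["id"]

# Example with a response shaped like the fields we use above:
auth_token, team_id = parse_auth_response(
    {"token": "abc123", "teams": [{"id": "team-1"}]}
)
```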

Sending an Authenticated Request

Great! We now have our user token, but how do we actually use it? In the case of MASV’s API, we attach the token to our request headers. A simple dictionary gets us started so we can reuse it throughout the script:

request_headers = {
    "Content-Type": "application/json",
    "X-User-Token": auth_token,
}

Setting up Automations with MASV Portals

A bit of a detour before we actually get started with Backblaze. Our goal is to automatically transfer and archive media files to Backblaze, and we can do that with the Portal feature in MASV. You can think of a Portal as a bucket on MASV that all your uploaded files land in.

A Portal can be created with the GUI or with a quick script. In this case, we are going to use a script.

"X-User-Token": auth_token,

# Get portal id
api_portals_route = '/teams/{}/portals'.format(team_id)

# This request will fail if the subdomain
# is already taken. Replace `masvbackblaze`
# below with your own for the request to succeed.
portal_request = {
    "name": "masvtobackblaze",
    "message": "Sending data to backblaze",
    "subdomain": "masvbackblaze",
    "recipients": [
        "[email protected]"
    ],
    "access_code": "your_access_code",
    "has_access_code": True,
    "download_password": "your_download_password",
    "has_download_password": True,
}

r = requests.post(
    api_uri + api_portals_route,
    headers=request_headers,
    json=portal_request
)

# We are creating something
# so we need to evaluate if it
# actually worked
if r.status_code != 201:
    print("Portal creation failed with status {}".format(r.status_code))
    exit(1)

portal = r.json()
portal_id = portal["id"]
print("Created Portal: {}".format(portal_id))

There is a ton to unpack here. We start with api_portals_route, the route for any Portal requests we may need to make. Python includes the lovely .format() function, which lets us include the team_id we fetched earlier in the URI.

A bit of boilerplate needs to be created for the actual request, so we set some reasonable defaults. The subdomain is a unique field across MASV, so each person will need their own.

Our request also now utilizes our authentication token via headers=request_headers; from now on, any request requiring authentication will include those headers.
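If repeating headers=request_headers on every call bothers you, a requests.Session can attach them once. This is an optional sketch, not something the rest of the script depends on; the token value here is a placeholder:

```python
import requests

# A Session sends these headers on every request automatically,
# so later calls can drop the headers=... argument entirely.
session = requests.Session()
session.headers.update({
    "Content-Type": "application/json",
    "X-User-Token": "YOUR_AUTH_TOKEN",
})

# session.post(url, json=payload) now behaves like
# requests.post(url, headers=request_headers, json=payload).
```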

We want to verify creation, so we check that the API responded with a 201 (created). If the Portal already exists, the API responds with a 422; in that case our script prints the status code and exits.
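Since we repeat this status check after every creation request, you could pull it into a small helper. A sketch, using the status codes described above:

```python
# Centralize the status check for creation requests: 201 means
# created, 422 means the resource (for a Portal, usually the
# subdomain) already exists.
def creation_ok(status_code):
    if status_code == 201:
        return True
    if status_code == 422:
        print("Already exists (422)")
    else:
        print("Unexpected status: {}".format(status_code))
    return False
```

You would then guard each request with `if not creation_ok(r.status_code): exit(1)`.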

Note: For full info on the portal creation visit our documentation.

Connecting Backblaze

We are now good to create a Backblaze B2 connection. To do so we are going to create the actual cloud connection and attach it to the Portal we created in the previous step. This will automate the transfer and archive of video files uploaded into MASV to Backblaze.


api_cloudconnect_route = '/teams/{}/cloud_connections'.format(team_id)

backblaze_connect_request = {
    "name": "backblazetomasv",
    "provider": "backblazeb2",
    "authorization": {
        "client_id": "0005bbxxxxxxxxxxxx",
        "client_secret": "K000xxxxxxxxxxx",
    },
    "destination": "team-bucket",
}

r = requests.post(
    api_uri + api_cloudconnect_route,
    headers=request_headers,
    json=backblaze_connect_request
)

# We are creating something
# so we need to evaluate if it
# actually worked
if r.status_code != 201:
    print("Cloud connection creation failed with status {}".format(r.status_code))
    exit(1)

cloudconnect_id = r.json()['id']
print("Created cloud connection {}".format(cloudconnect_id))

Much of this code should feel familiar by now. We start by building the cloud connection route on our API, followed by some parameters for our Backblaze connection. The name field should be changed to match whatever naming convention your company uses.

The authorization block is required; fill it in with the credentials you generated by following the Backblaze application keys guide earlier.
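As with the MASV credentials earlier, you may prefer to pull the B2 key pair from the environment instead of pasting it into the script. A sketch, where B2_KEY_ID and B2_APP_KEY are hypothetical variable names and the payload fields match the request above:

```python
import os

# Assemble the cloud connection payload with the B2 key pair read
# from environment variables, keeping secrets out of source control.
def build_backblaze_request(bucket):
    return {
        "name": "backblazetomasv",
        "provider": "backblazeb2",
        "authorization": {
            "client_id": os.environ.get("B2_KEY_ID", ""),
            "client_secret": os.environ.get("B2_APP_KEY", ""),
        },
        "destination": bucket,
    }

backblaze_connect_request = build_backblaze_request("team-bucket")
```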

# Attach to portal
api_cloudconnect_attach_route = "/portals/{}".format(portal_id)

# Reuse the Portal payload and add the cloud connection to it
api_cloudconnect_request = portal_request
api_cloudconnect_request['active'] = True
api_cloudconnect_request['configure_cloud_connections'] = True
api_cloudconnect_request['cloud_connections'] = [{
    "id": cloudconnect_id,
    "active": True,
    "target_action": "transfer"
}]

# We are updating an existing Portal, so we use PUT
r = requests.put(
    api_uri + api_cloudconnect_attach_route,
    headers=request_headers,
    json=api_cloudconnect_request
)

print("Created portal connection")

Run the code with a final:

$ python3 backblaze.py

Which will output the following:

Grabbed authentication token
Created Portal: YOUR_PORTAL_ID_HERE
Created cloud connection YOUR_CLOUD_CONNECTION_ID_HERE
Created portal connection

This output confirms that your script ran successfully and you now have a Portal created and attached to a Backblaze B2 bucket.

For a bit of extra validation that the Portal is connected, we can navigate to the Cloud Connect dashboard. Under the Cloud Connection page, you can see the script created a connection with the name backblazetomasv. A green ok status means everything is good to go.
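If you would rather verify from code than from the dashboard, a small sketch that scans a list of cloud connections for the one we just made. The list shape here is assumed from our creation payload, not taken from the API reference:

```python
# Find a cloud connection by name in a list of connection dicts,
# returning None if it is not present.
def find_connection(connections, name):
    for conn in connections:
        if conn.get("name") == name:
            return conn
    return None

# Example with a list shaped like our creation payload:
connections = [{"id": "abc123", "name": "backblazetomasv"}]
match = find_connection(connections, "backblazetomasv")
```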

The next step on your list is to actually transfer and archive those media files. A quick drag and drop to MASV and you can sign off for the night, knowing the data will be sent off and stored safely.

Your manager (the recipient you set above) will receive an email when they can download the file. Backblaze will have your file(s) ready in one of its buckets.


Want to find out how to transfer files by just moving them into a folder on your desktop? Check out our ongoing blog series about deploying TransferAgent in Docker. Or if you would love to see more of what you can do with our API check out our Dev Docs.
