{"__v":2,"_id":"56dac0483dede50b00eacb55","api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"body":"In 2014, Amazon Web Services released a groundbreaking service called [AWS Lambda](https://aws.amazon.com/lambda/) that offers a new way to deploy your *web*, *mobile* or *IoT* application's code to the cloud.  Instead of deploying the entire codebase of your app all at once, with Lambda you deploy each of your application's functions individually, to their own containers.  Overall, Lambda is groundbreaking for three reasons:\n\n* ##### **Pay-Per-Use Pricing**\nAWS Lambda charges you only when your functions are run.  No more monthly-based billing for servers, no more wasted dollars spent on under-utilized servers.  There is no such thing as under-utilization with AWS Lambda. \n\n* ##### **Unprecedented Agility**\nLambda offers ready to go containers and orchestration out-of-the-box, leaving you free to decide how you would like to containerize your logic.  You can choose a monolithic, microservices, or nanoservices approach with Lambda.  Read more about that in our [Overview](http://docs.serverless.com/docs/introducing-serverless).\n\n* ##### **No Servers**\nEvery Lambda function container auto-scales automatically when called concurrently.  Since your functions can scale massively out-of-the-box, you no longer have to think about scaling/managing servers, Amazon deals with that.  You focus only on building your product, wit.\n\nHowever... while AWS Lambda offers a powerful new way of developing/running applications, when we began building our first project based entirely on AWS Lambda, we realized structure was badly needed.  Managing all of the containers that Lambda introduces is a difficult task.  Add to that multi-developer teams, multi-stage and multi-region support and you will quickly get into a messy situation.\n\nThus, the [Serverless Framework](https://github.com/serverless/serverless) was born.  The first and most powerful framework for building applications exclusively on AWS Lambda.","category":"56dac0483dede50b00eacb51","createdAt":"2015-12-21T07:29:38.121Z","excerpt":"How the Serverless Framework Was Born","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":0,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"backstory","sync_unique":"","title":"Backstory","type":"basic","updates":["56bc374fb228ec0d00cb3fc8","57459c9111628d0e009a7873","5745a08943d4d41700a19e75"],"user":"5611c1e58c76a61900fd0739","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Backstory

How the Serverless Framework Was Born

In 2014, Amazon Web Services released a groundbreaking service called [AWS Lambda](https://aws.amazon.com/lambda/) that offers a new way to deploy your *web*, *mobile* or *IoT* application's code to the cloud. Instead of deploying your app's entire codebase all at once, with Lambda you deploy each of your application's functions individually, to their own containers. Overall, Lambda is groundbreaking for three reasons:

* ##### **Pay-Per-Use Pricing**
AWS Lambda charges you only when your functions run. No more monthly billing for servers, no more wasted dollars spent on under-utilized servers. There is no such thing as under-utilization with AWS Lambda.

* ##### **Unprecedented Agility**
Lambda offers ready-to-go containers and orchestration out-of-the-box, leaving you free to decide how you would like to containerize your logic. You can choose a monolithic, microservices, or nanoservices approach with Lambda. Read more about that in our [Overview](http://docs.serverless.com/docs/introducing-serverless).

* ##### **No Servers**
Every Lambda function container scales automatically when called concurrently. Since your functions can scale massively out-of-the-box, you no longer have to think about scaling or managing servers; Amazon deals with that. You focus only on building your product.

However, while AWS Lambda offers a powerful new way of developing and running applications, when we began building our first project based entirely on AWS Lambda, we realized structure was badly needed. Managing all of the containers that Lambda introduces is a difficult task. Add multi-developer teams, multi-stage and multi-region support, and you will quickly end up in a messy situation.

Thus, the [Serverless Framework](https://github.com/serverless/serverless) was born: the first and most powerful framework for building applications exclusively on AWS Lambda.
{"__v":1,"_id":"56dac0483dede50b00eacb56","api":{"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":""},"body":"Serverless is an application framework for building serverless web, mobile and IoT applications exclusively on [AWS Lambda](https://aws.amazon.com/lambda/).  A Serverless app can simply be a couple of lambda functions to accomplish some tasks, or an entire back-end comprised of hundreds of lambda functions. Serverless currently supports `nodejs` and `python2.7` runtimes. Support for `Java` and other future runtimes that AWS Lambda will support will be coming soon.\n\nServerless comes in the form of a Node.js command line interface that provides structure, automation and optimization to help you build and maintain Serverless apps.  The CLI allows you to control your Lambdas, API Gateway Endpoints as well as your AWS resources via AWS CloudFormation.  Overall, we've made a strong effort to make not just a groundbreaking Serverless framework, but the best framework for building applications with AWS in general (that is also Serverless!). As a result, Serverless incorporates years of AWS expertise into its tooling, giving you best practices out-of-the-box.  Serverless does not seek to conceal AWS in abstraction, but to put structure around the AWS SDK and CloudFormation, and approach Amazon Web Services and all that it offers from the focal point of Lambda.  In the future, we believe that Lambda will be the focal point of AWS.\n\nLastly, we work full time on this and are funded by a top tier VC firm in Silicon Valley.  We are here for the long-term to support developers building Serverless applications.  Don't hesitate to [email us](mailto:team@serverless.com) and ask us for help.  Also, we're growing our team.  If the Serverless architecture appeals to you or you just want to make awesome tools for other developers, please [let us know](mailto:team@serverless.com).","category":"56dac0483dede50b00eacb51","createdAt":"2015-10-16T17:26:09.813Z","excerpt":"What the Framework Does","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":1,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"introducing-serverless","sync_unique":"","title":"Overview","type":"basic","updates":["56a0f884e7056c170060b914","577d966a6172c7200012842e"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Overview

What the Framework Does

Serverless is an application framework for building serverless web, mobile and IoT applications exclusively on [AWS Lambda](https://aws.amazon.com/lambda/). A Serverless app can simply be a couple of Lambda functions that accomplish some tasks, or an entire back-end comprised of hundreds of Lambda functions. Serverless currently supports the `nodejs` and `python2.7` runtimes. Support for `Java` and other runtimes will be added as AWS Lambda releases them.

Serverless comes in the form of a Node.js command line interface that provides structure, automation and optimization to help you build and maintain Serverless apps. The CLI allows you to control your Lambdas and API Gateway endpoints, as well as your AWS resources, via AWS CloudFormation. Overall, we've made a strong effort to build not just a groundbreaking serverless framework, but the best framework for building applications with AWS in general (that also happens to be serverless!). As a result, Serverless incorporates years of AWS expertise into its tooling, giving you best practices out-of-the-box. Serverless does not seek to conceal AWS behind abstraction, but to put structure around the AWS SDK and CloudFormation, and to approach Amazon Web Services and all that it offers from the focal point of Lambda. In the future, we believe that Lambda will be the focal point of AWS.

Lastly, we work full time on this and are funded by a top-tier VC firm in Silicon Valley. We are here for the long term to support developers building Serverless applications. Don't hesitate to [email us](mailto:team@serverless.com) and ask us for help. Also, we're growing our team. If the Serverless architecture appeals to you or you just want to make awesome tools for other developers, please [let us know](mailto:team@serverless.com).
{"__v":2,"_id":"56dac0483dede50b00eacb57","api":{"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","auth":"required","params":[],"url":""},"body":"AWS gives you a ton of free resources whenever you create a new AWS account. This is called the free tier. It includes a massive allowance of free Lambda Requests, DynamoDB tables, S3 storage, and more. Before building Serverless apps, we strongly recommend starting with a fresh AWS account for maximum cost savings.\n\n# Creating an Administrative IAM User\nThe Serverless Framework is one of the first application frameworks to manage both your code and infrastructure.  To manage your infrastructure on AWS, we're going to create an Admin user which can access and configure the services in your AWS account.  To get you up and running quickly, we're going to create a AWS IAM User with Administrative Access to your AWS account.\n[block:callout]\n{\n  \"type\": \"danger\",\n  \"body\": \"In a production environment we recommend reducing the permissions to the IAM User which the Framework uses.  Unfortunately, the Framework's functionality is growing so fast, we can't yet offer you a finite set of permissions it needs.  In the interim, ensure that your AWS API Keys are kept in a safe, private location.\",\n  \"title\": \"Admin Access to Your AWS Account\"\n}\n[/block]\nNow let's create an Admin IAM user:\n\n* Create or login to your Amazon Web Services Account and go the the Identity & Access Management (IAM) Page.\n* Click on **Users** and then Create New Users. Enter *serverless-admin* in the first field and click **Create**.\n* View and copy the security credentials/API Keys in a safe place.\n* In the User record in the AWS IAM Dashboard, look for **Managed Policies** on the **Permissions** tab and click **Attach Policy**. In the next screen, search for and select **AdministratorAccess** then click **Attach**.\n\nWhen you go to create or install a Serverless Project, you will be prompted to enter your AWS Access Keys.  Once you enter them, they will be persisted to your Project's folder in the `admin.env` file.\n\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"AWS Profiles\",\n  \"body\": \"The Serverless Framework can also work with AWS Profiles already set on your computer.  In the Project Create or Project Install screens, it will detect your AWS Profiles and display them.  Select the one you want, and the AWS Access Keys for that profile will be persisted to your project.\"\n}\n[/block]","category":"56dac0483dede50b00eacb51","createdAt":"2015-10-16T19:57:25.686Z","excerpt":"Configuring AWS and Giving Serverless Access to Your Account","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":2,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"configuring-aws","sync_unique":"","title":"Configuring AWS","type":"basic","updates":["5629dbcb8437010d00c43bc5","567733077c62750d00dac222","56ca4df760118f0d00338226","56d23a0793f76e0b00bbc60f","572117cc69a3e40e00fb54c3","5728206690a5580e00419168"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Configuring AWS

Configuring AWS and Giving Serverless Access to Your Account

AWS gives you a ton of free resources whenever you create a new AWS account. This is called the free tier. It includes a large allowance of free Lambda requests, DynamoDB tables, S3 storage, and more. Before building Serverless apps, we strongly recommend starting with a fresh AWS account for maximum cost savings.

# Creating an Administrative IAM User
The Serverless Framework is one of the first application frameworks to manage both your code and your infrastructure. To manage your infrastructure on AWS, we're going to create an Admin user which can access and configure the services in your AWS account. To get you up and running quickly, we're going to create an AWS IAM User with Administrative Access to your AWS account.

> **Admin Access to Your AWS Account**
> In a production environment we recommend reducing the permissions of the IAM User which the Framework uses. Unfortunately, the Framework's functionality is growing so fast that we can't yet offer you a finite set of permissions it needs. In the interim, ensure that your AWS API Keys are kept in a safe, private location.

Now let's create an Admin IAM user (an equivalent AWS CLI sketch follows at the end of this section):

* Create or log in to your Amazon Web Services account and go to the Identity & Access Management (IAM) page.
* Click on **Users** and then **Create New Users**. Enter *serverless-admin* in the first field and click **Create**.
* View the security credentials/API Keys and copy them to a safe place.
* In the user record in the AWS IAM dashboard, look for **Managed Policies** on the **Permissions** tab and click **Attach Policy**. In the next screen, search for and select **AdministratorAccess**, then click **Attach**.

When you go to create or install a Serverless project, you will be prompted to enter your AWS Access Keys. Once you enter them, they will be persisted to your project's folder in the `admin.env` file.

> **AWS Profiles**
> The Serverless Framework can also work with AWS Profiles already set on your computer. In the Project Create or Project Install screens, it will detect your AWS Profiles and display them. Select the one you want, and the AWS Access Keys for that profile will be persisted to your project.
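
If you prefer the command line to the console, the same admin user can be created with the AWS CLI. This is a minimal sketch, assuming you already have the AWS CLI installed and configured with credentials that are allowed to manage IAM; the `serverless-admin` name simply mirrors the console steps above.

```
# Create the admin user the Serverless Framework will use
aws iam create-user --user-name serverless-admin

# Attach the AWS-managed AdministratorAccess policy to it
aws iam attach-user-policy \
  --user-name serverless-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the access key pair you'll enter when creating or installing a project
aws iam create-access-key --user-name serverless-admin
```

The last command prints the Access Key ID and Secret Access Key; copy them somewhere safe, since the secret is only shown once.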
{"__v":8,"_id":"56dac0483dede50b00eacb58","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"Now that you've configured AWS, you're ready to start using the Serverless framework. To install, simply run the following command:\n\n```\nnpm install serverless -g\n```\n\nAfter it installs, you can create a new project:\n\n```\nserverless project create\n```\nThe Serverless CLI will ask for a few pieces of information about your project (name, domain, email...etc). Serverless uses this information to build up your stack with [CloudFormation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html). This process takes around 3 mins.\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Provisioning CloudFormation\",\n  \"body\": \"If you would like to create the project structure without AWS resources being provisioned, use `serverless project create -c true`.  You can inspect the generated CloudFormation and execute when ready.  For more info, [read the Project Create docs below](/project-create).\"\n}\n[/block]\nNow you have a barebones project that is pretty much useless. To make it a more useful, let's create a function. Make sure you're in the root directory of your newly created project, and then run:\n\n```\nserverless function create functions/function1\n```\n\nYou'll be prompted to choose whether to create only a function, or create an endpoint or event along with your function. Choose \"Create Endpoint\" for this demo. This will create a `function1` function with one endpoint inside a folder called `functions`. You can create a function directly in the root of you project with `serverless function create function1`, but we recommend you group your functions in folders and subfolders.\n\nRun `serverless dash deploy` to open up the interactive deployment dashboard. Use the down arrow key to select the function and press enter.  Down arrow once more to select the endpoint and press the enter button again (so that both text items switch from grey to yellow). \nFinally, move to \"Deploy\" and hit enter.\n\nAfter deployment is complete, you will be given a URL. Visit this URL in your browser to see your new project deployed. Deployment couldn't get any simpler!\n\nYou now have a basic Serverless project, and you're ready to take a deep dive and explore the project structure.","category":"56dac0483dede50b00eacb51","createdAt":"2015-10-16T19:57:59.559Z","excerpt":"Installing the Serverless Framework and Creating Your First Serverless Project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":3,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"installing-serverless","sync_unique":"","title":"Installing Serverless","type":"basic","updates":["56706db2e10ecb0d0004ef52","567074d93d29830d00376234","56a6dd887ef6620d00e2f26a","56a8decec0e02f0d007fa91e","56c14432f203270d00d6c544","56ce7d719636b713006b0c78","56ce851a9636b713006b0c7c","572d40376823a30e00df7f96","57459be7a488e817005418a9","577e74c2e6c745190080824b","5791a310180e233400f7102d","5798142117ced017003c4c08","57ab57ea39c2fd1900191864","57b360a99d021b170063d694"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Installing Serverless

Installing the Serverless Framework and Creating Your First Serverless Project

Now that you've configured AWS, you're ready to start using the Serverless Framework. To install, simply run the following command:

```
npm install serverless -g
```

After it installs, you can create a new project:

```
serverless project create
```

The Serverless CLI will ask for a few pieces of information about your project (name, domain, email, etc.). Serverless uses this information to build up your stack with [CloudFormation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html). This process takes around 3 minutes.

> **Provisioning CloudFormation**
> If you would like to create the project structure without AWS resources being provisioned, use `serverless project create -c true`. You can inspect the generated CloudFormation and execute it when ready. For more info, [read the Project Create docs below](/project-create).

Now you have a barebones project that is pretty much useless. To make it more useful, let's create a function. Make sure you're in the root directory of your newly created project, and then run:

```
serverless function create functions/function1
```

You'll be prompted to choose whether to create only a function, or to create an endpoint or event along with your function. Choose "Create Endpoint" for this demo. This will create a `function1` function with one endpoint inside a folder called `functions`. You can create a function directly in the root of your project with `serverless function create function1`, but we recommend you group your functions in folders and subfolders.

Run `serverless dash deploy` to open the interactive deployment dashboard. Use the down arrow key to select the function and press enter. Press the down arrow once more to select the endpoint and press enter again (so that both items switch from grey to yellow). Finally, move to "Deploy" and hit enter.

After deployment is complete, you will be given a URL. Visit this URL in your browser to see your new project deployed. Deployment couldn't get any simpler!

You now have a basic Serverless project, and you're ready to take a deep dive and explore the project structure.
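
For reference, the scaffolded function is just an ordinary AWS Lambda handler. The exact file that `serverless function create` generates may differ between framework versions, but a minimal `functions/function1/handler.js` for the `nodejs` runtime looks roughly like this sketch (the response payload is made up for illustration):

```javascript
'use strict';

// Minimal Lambda handler sketch. The "handler" export matches the
// "function1/handler.handler" path referenced by s-function.json.
module.exports.handler = function(event, context) {
  // Return a simple JSON payload to the caller (e.g. the API Gateway
  // endpoint you just deployed).
  return context.done(null, {
    message: 'Your Serverless function executed successfully!',
    receivedEvent: event
  });
};
```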
{"__v":14,"_id":"56dac0493dede50b00eacb71","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"A basic Serverless project contains the following directory structure: \n```\ns-project.json\ns-resources-cf.json\nadmin.env\n_meta\n    |__resources\n         |__s-resources-cf-dev-useast1.json\n    |__variables\n         |__s-variables-common.json\n         |__s-variables-dev.json\n         |__s-variables-dev-useast1.json\nfunctions\n    |__function1\n         |__event.json\n         |__handler.js\n         |__s-function.json\n```\nHere's the same directory structure with some explanation:\n```\ns-project.json                // project and author data\ns-resources-cf.json       // CloudFormation template for all stages/regions\nadmin.env                     // AWS Profiles - gitignored)\n_meta                          // meta data that holds stage/regions config and variables - gitignored\n    |__resources             // final CF templates for each stage/region\n         |__s-resources-cf-dev-useast1.json\n    |__variables              // variables specific to stages and regions\n         |__s-variables-common.json\n         |__s-variables-dev.json\n         |__s-variables-dev-useast1.json\nfunctions                      // folder to group your project functions\n    |__function1            // your first function\n         |__event.json      // sample event for testing function locally\n         |__handler.js      // your function handler file\n         |__s-function.json   // data for your lambda function, endpoints and event sources\n```\nNow let's dive deeper into the most critical pieces of a Serverless project:\n\n# Project\nEach Serverless Project contains an `s-project.json` file that looks like this:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\t{\\n  \\\"name\\\": \\\"projectName\\\",\\n  \\\"custom\\\": {}, // For plugin authors to add any properties that they need\\n  \\\"plugins\\\": [] // List of plugins used by this project\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-project.json\"\n    }\n  ]\n}\n[/block]\n# Meta Data\nEach Serverless project contains a `_meta` folder in its root directory. This folder holds user specific project data, like stages, regions, CloudFormation template files and variables (more on variables later). Since this folder contains sensitive information, it's gitignored by default, allowing you to share your Serverless projects with others, where they can add their own meta data.\n\n# Functions\nFunctions are the core of a Serverless project. These are the functions that get deployed to AWS Lambda. You can organize your project functions however you like. We recommend you group all of your project functions in a `functions` folder in the root of your project. You can make it even more organized with more nesting and subfolders inside that `functions` folder. For simple projects, you can put your functions directly in the root of your project. It's completely flexible.\n\nEach function can have several endpoints, and each endpoint can have several methods (ie. GET, POST...etc). These are the endpoints that get deployed to AWS API Gateway and they all point to the Function that they're defined within. Functions can also have several event sources (i.e. DynamoDB, S3..etc). 
You can configure your Lambda Function, its Endpoints and Events in the `s-function.json` file:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"name\\\": \\\"functionName\\\",\\n  \\\"customName\\\": false,  // Custom name for your deployed Lambda function\\n  \\\"customRole\\\": false,  // Custom IAM Role for your deployed Lambda function\\n  \\\"handler\\\": \\\"function1/handler.handler\\\", // path of the handler relative to the function root\\n  \\\"runtime\\\": \\\"nodejs\\\",\\n  \\\"description\\\": \\\"some description for your lambda\\\",\\n  \\\"timeout\\\": 6,\\n  \\\"memorySize\\\": 1024,\\n  \\\"custom\\\": {\\n    \\\"excludePatterns\\\": [] // an array of whatever you don't want to deploy with the function\\n  },\\n  \\\"environment\\\": { // env vars needed by your function. Makes use of Serverless variables\\n    \\\"SOME_ENV_VAR\\\": \\\"${envVarValue}\\\"\\n  },\\n  \\\"events\\\": [ // event sources for this lambda\\n    {\\n      \\\"name\\\" : \\\"myEventSource\\\", // unique name for this event source\\n      \\\"type\\\": \\\"schedule\\\", // type of event source\\n      \\\"config\\\": {\\n         \\\"schedule\\\": \\\"rate(5 minutes)\\\",\\n          \\\"enabled\\\": true\\n     }\\n    }\\n], \\n  \\\"endpoints\\\": [ // an array of endpoints that will invoke this lambda function\\n    {\\n      \\\"path\\\": \\\"function1\\\",\\n      \\\"method\\\": \\\"GET\\\",\\n      \\\"authorizationType\\\": \\\"none\\\",\\n      \\\"apiKeyRequired\\\": false,\\n      \\\"requestParameters\\\": {},\\n      \\\"requestTemplates\\\": {\\n        \\\"application/json\\\": \\\"\\\"\\n      },\\n      \\\"responses\\\": {\\n        \\\"400\\\": {\\n          \\\"selectionPattern\\\": \\\"^\\\\\\\\[BadRequest\\\\\\\\].*\\\", // selectionPattern is mapped to the Lambda Error Regex\\n          \\\"statusCode\\\": \\\"400\\\" // HTTP Status that is returned as part of the regex matching\\n        },\\n        \\\"403\\\": {\\n          \\\"selectionPattern\\\": \\\"^\\\\\\\\[Forbidden\\\\\\\\].*\\\",\\n          \\\"statusCode\\\": \\\"403\\\"\\n        },\\n        \\\"404\\\": {\\n          \\\"selectionPattern\\\": \\\"^\\\\\\\\[NotFound\\\\\\\\].*\\\",\\n          \\\"statusCode\\\": \\\"404\\\"\\n        },\\n        \\\"default\\\": {\\n          \\\"statusCode\\\": \\\"200\\\",\\n          \\\"responseParameters\\\": {},\\n          \\\"responseModels\\\": {},\\n          \\\"responseTemplates\\\": {\\n            \\\"application/json\\\": \\\"\\\"\\n          }\\n        }\\n      }\\n    }\\n  ],\\n  \\\"vpc\\\": {\\n    \\\"securityGroupIds\\\": [],\\n    \\\"subnetIds\\\": []\\n  }\\n}\\t\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json\"\n    }\n  ]\n}\n[/block]\nFor more information about the `s-function.json` file, check out the [Function Configuration section](/docs/function-configuration).","category":"56dac0483dede50b00eacb52","createdAt":"2015-10-22T20:13:08.090Z","excerpt":"How Serverless Projects Look Like","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":0,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project-structure","sync_unique":"","title":"Project 
Structure","type":"basic","updates":["567345d48565060d009a863f","56a6dcb448120d21000e52d6","56a9bb24f834950d0037b396","56b8d89143bbd10d0081d1c5","56f1ba500157b9200072df3f","56f23532e2c04c0e0097e5cb","56f2a22a2344ff0e006c014b","571980fb4604d417003d25a8","572e2ef88285700e00c93acb","5771950b01e8110e0041acc6","57ab5a4eb5c9591700b87796"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Structure

What Serverless Projects Look Like

A basic Serverless project contains the following directory structure:

```
s-project.json
s-resources-cf.json
admin.env
_meta
    |__resources
         |__s-resources-cf-dev-useast1.json
    |__variables
         |__s-variables-common.json
         |__s-variables-dev.json
         |__s-variables-dev-useast1.json
functions
    |__function1
         |__event.json
         |__handler.js
         |__s-function.json
```

Here's the same directory structure with some explanation:

```
s-project.json            // project and author data
s-resources-cf.json       // CloudFormation template for all stages/regions
admin.env                 // AWS Profiles - gitignored
_meta                     // meta data holding stage/region config and variables - gitignored
    |__resources          // final CF templates for each stage/region
         |__s-resources-cf-dev-useast1.json
    |__variables          // variables specific to stages and regions
         |__s-variables-common.json
         |__s-variables-dev.json
         |__s-variables-dev-useast1.json
functions                 // folder to group your project functions
    |__function1          // your first function
         |__event.json    // sample event for testing the function locally
         |__handler.js    // your function handler file
         |__s-function.json  // data for your lambda function, endpoints and event sources
```

Now let's dive deeper into the most critical pieces of a Serverless project:

# Project
Each Serverless project contains an `s-project.json` file that looks like this:

**s-project.json**
```json
{
  "name": "projectName",
  "custom": {},  // For plugin authors to add any properties that they need
  "plugins": []  // List of plugins used by this project
}
```

# Meta Data
Each Serverless project contains a `_meta` folder in its root directory. This folder holds user-specific project data, like stages, regions, CloudFormation template files and variables (more on variables later). Since this folder contains sensitive information, it's gitignored by default, allowing you to share your Serverless projects with others, who can then add their own meta data.

# Functions
Functions are the core of a Serverless project. These are the functions that get deployed to AWS Lambda. You can organize your project functions however you like. We recommend you group all of your project functions in a `functions` folder in the root of your project. You can make it even more organized with more nesting and subfolders inside that `functions` folder. For simple projects, you can put your functions directly in the root of your project. It's completely flexible.

Each function can have several endpoints, and each endpoint can have several methods (i.e. GET, POST, etc.). These are the endpoints that get deployed to AWS API Gateway, and they all point to the function they're defined within. Functions can also have several event sources (i.e. DynamoDB, S3, etc.).

You can configure your Lambda function, its endpoints and events in the `s-function.json` file:

**s-function.json**
```json
{
  "name": "functionName",
  "customName": false,  // Custom name for your deployed Lambda function
  "customRole": false,  // Custom IAM Role for your deployed Lambda function
  "handler": "function1/handler.handler",  // path of the handler relative to the function root
  "runtime": "nodejs",
  "description": "some description for your lambda",
  "timeout": 6,
  "memorySize": 1024,
  "custom": {
    "excludePatterns": []  // an array of whatever you don't want to deploy with the function
  },
  "environment": {  // env vars needed by your function. Makes use of Serverless variables
    "SOME_ENV_VAR": "${envVarValue}"
  },
  "events": [  // event sources for this lambda
    {
      "name": "myEventSource",  // unique name for this event source
      "type": "schedule",  // type of event source
      "config": {
        "schedule": "rate(5 minutes)",
        "enabled": true
      }
    }
  ],
  "endpoints": [  // an array of endpoints that will invoke this lambda function
    {
      "path": "function1",
      "method": "GET",
      "authorizationType": "none",
      "apiKeyRequired": false,
      "requestParameters": {},
      "requestTemplates": {
        "application/json": ""
      },
      "responses": {
        "400": {
          "selectionPattern": "^\\[BadRequest\\].*",  // selectionPattern is mapped to the Lambda Error Regex
          "statusCode": "400"  // HTTP status that is returned as part of the regex matching
        },
        "403": {
          "selectionPattern": "^\\[Forbidden\\].*",
          "statusCode": "403"
        },
        "404": {
          "selectionPattern": "^\\[NotFound\\].*",
          "statusCode": "404"
        },
        "default": {
          "statusCode": "200",
          "responseParameters": {},
          "responseModels": {},
          "responseTemplates": {
            "application/json": ""
          }
        }
      }
    }
  ],
  "vpc": {
    "securityGroupIds": [],
    "subnetIds": []
  }
}
```

For more information about the `s-function.json` file, check out the [Function Configuration section](/docs/function-configuration).
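
The `responses` block above is what turns Lambda errors into proper HTTP status codes: API Gateway matches the error message returned by your function against each `selectionPattern`. As a hedged sketch (the in-memory data and the `id` field on the event are assumptions for illustration, and depend on your request template), a handler could trigger the 404 mapping like this:

```javascript
'use strict';

// Hypothetical in-memory data, only for illustration.
var users = { '1': { id: '1', name: 'Ada' } };

module.exports.handler = function(event, context) {
  // Assumes your request template maps a path parameter into event.id.
  var user = users[event.id];

  if (!user) {
    // The error message starts with "[NotFound]", so it matches the
    // "^\\[NotFound\\].*" selectionPattern and is returned as an HTTP 404.
    return context.done(new Error('[NotFound] No user with id ' + event.id));
  }

  // Falls through to the "default" response and is returned as a 200.
  return context.done(null, user);
};
```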
{"__v":4,"_id":"56dac0493dede50b00eacb73","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"One of the strengths of the Serverless framework is it gives you the freedom to containerize/isolate your logic any way you'd like. This is possible due to simple nesting of folders: `project/functions/subfolder/functions`.  They allow the following architectures and patterns to be available to you in the framework:\n\n###Monolithic \nIf you choose, you can contain all of your logic into a single Lambda function.  Then add multiple endpoints or events to that single Lambda function. In the Framework, you could do this by creating a single `function`.\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Consider Using GraphQL\",\n  \"body\": \"If you're making a REST API, consider using [GraphQL](https://github.com/graphql/graphql-js) in front of your databases to reduce the number of endpoints you need.  GraphQL may make Monolithic approaches viable again.\\n\\nHere's an [example Serverless Blog REST API](https://github.com/serverless/serverless-graphql-blog) that has only one endpoint, powered by GraphQL.\"\n}\n[/block]\n###Microservices \nWith this pattern, you can organize your functions in folders (e.g. `restApi` folder) and subfolders according to business logic. You can think of each subfolder as a `resource` (e.g. `restApi/users/`). It's a common pattern to have only a single Lambda function in a `resource`, which can handle all logic for whatever that `resource` is dedicated to.\n\n* `restApi/users/usersAll`\n* `restApi/posts/postsAll`\n* `restApi/comments/commentsAll`\n\nFor example, a REST API could have a `users` resource containing one 'all' function that handles all actions for `users`.  Assign endpoints for `create, read, update, delete` to the `all` function.  API Gateway can pass the METHOD and PATH into your function via the `event` object, so you can determine inside your function how to handle/route incoming request.\n\n###Nanoservices \nFor the most agile solution, you can create a functions for every single endpoint and event you have.\n\nFor example, a REST API could have a `users` subfolder/resource containing multiple functions for `create, read, update, delete`.  \n\n* `restApi/users/createUser`\n* `restApi/users/readUser`\n* `restApi/users/updateUser`\n* `restApi/users/deleteUser`\n\nThis way you can update/iterate on each endpoint or event individually, without affecting any other parts of your application.  Like your `restApi/users/create` function. \n\n###Mixing\nYou can always mix the above approaches:\n\n* `restApi/users/usersAll`\n* `restApi/posts/createPost`\n* `restApi/posts/readPost`\n* `restApi/posts/updatePost`\n* `restApi/posts/deletePost`","category":"56dac0483dede50b00eacb52","createdAt":"2016-01-29T17:54:33.300Z","excerpt":"Determining How to Containerize Your Logic","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":1,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"application-architectures","sync_unique":"","title":"Project Architectures","type":"basic","updates":["56f2344bd6a20e0e00aa2127","572e32166922930e00a8b102","57ab5bcab5e8742000e17e6f"],"user":"5611c1e58c76a61900fd0739","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Architectures

Determining How to Containerize Your Logic

One of the strengths of the Serverless Framework is that it gives you the freedom to containerize/isolate your logic any way you'd like. This is possible through simple nesting of folders (e.g. `project/functions/subfolder/functions`), which makes the following architectures and patterns available to you in the framework:

### Monolithic
If you choose, you can contain all of your logic in a single Lambda function, then add multiple endpoints or events to that single Lambda function. In the Framework, you could do this by creating a single `function`.

> **Consider Using GraphQL**
> If you're making a REST API, consider using [GraphQL](https://github.com/graphql/graphql-js) in front of your databases to reduce the number of endpoints you need. GraphQL may make monolithic approaches viable again.
>
> Here's an [example Serverless Blog REST API](https://github.com/serverless/serverless-graphql-blog) that has only one endpoint, powered by GraphQL.

### Microservices
With this pattern, you organize your functions in folders (e.g. a `restApi` folder) and subfolders according to business logic. You can think of each subfolder as a `resource` (e.g. `restApi/users/`). It's a common pattern to have only a single Lambda function in a `resource`, which handles all logic for whatever that `resource` is dedicated to.

* `restApi/users/usersAll`
* `restApi/posts/postsAll`
* `restApi/comments/commentsAll`

For example, a REST API could have a `users` resource containing one 'all' function that handles all actions for `users`. Assign endpoints for `create, read, update, delete` to the `all` function. API Gateway can pass the METHOD and PATH into your function via the `event` object, so you can determine inside your function how to handle and route the incoming request (see the routing sketch at the end of this section).

### Nanoservices
For the most agile solution, you can create a function for every single endpoint and event you have.

For example, a REST API could have a `users` subfolder/resource containing multiple functions for `create, read, update, delete`:

* `restApi/users/createUser`
* `restApi/users/readUser`
* `restApi/users/updateUser`
* `restApi/users/deleteUser`

This way you can update and iterate on each endpoint or event individually (like your `restApi/users/createUser` function) without affecting any other part of your application.

### Mixing
You can always mix the above approaches:

* `restApi/users/usersAll`
* `restApi/posts/createPost`
* `restApi/posts/readPost`
* `restApi/posts/updatePost`
* `restApi/posts/deletePost`
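
Here is a hedged sketch of what the single 'all' function from the Microservices pattern might look like. The `event.method`, `event.id` and `event.body` field names are assumptions: they depend entirely on how your endpoint's request template maps values such as `$context.httpMethod` and your path parameters into the event.

```javascript
'use strict';

// Sketch of a single "usersAll" handler that routes on the HTTP method
// your API Gateway request template maps into the event (assumed fields).
module.exports.handler = function(event, context) {
  switch (event.method) {
    case 'GET':
      return context.done(null, { action: 'read user', id: event.id });
    case 'POST':
      return context.done(null, { action: 'create user', data: event.body });
    case 'PUT':
      return context.done(null, { action: 'update user', id: event.id });
    case 'DELETE':
      return context.done(null, { action: 'delete user', id: event.id });
    default:
      // Matches the "^\\[BadRequest\\].*" selectionPattern shown earlier,
      // so API Gateway returns it as an HTTP 400.
      return context.done(new Error('[BadRequest] Unsupported method: ' + event.method));
  }
};
```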
{"__v":10,"_id":"56dac0493dede50b00eacb74","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"Serverless Projects use configuration files written in JSON which can get big, and they sometimes need to include dynamic values that change for stages and regions (ie. ARNs).  \n\nTo reduce redundancy in the config files, we created Project Templates.  To allow for dynamic values in the config files, we created Project Variables.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Templates & Variables are for Configuration Only\",\n  \"body\": \"Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.\"\n}\n[/block]\n## Templates\nTemplates are variables containing objects, arrays or strings.  These dramatically reduce redundancy in your configuration files, since you can use the same template in multiple places.\n\nHere is an example of a template that can be used as [API Gateway Mapping Template](http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html)\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"apiGatewayRequestTemplate\\\": {\\n    \\\"application/json\\\": {\\n      \\\"body\\\": \\\"$input.json('$')\\\",\\n      \\\"pathParams\\\" : \\\"$input.params().path\\\",\\n      \\\"queryParams\\\" : \\\"$input.params().querystring\\\"\\n    }\\n  }\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-templates.json\"\n    }\n  ]\n}\n[/block]\nOr, it can be a YAML file instead of JSON:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"  apiGatewayRequestTemplate: \\n    application/json: \\n      body: \\\"$input.json('$')\\\"\\n      pathParams: \\\"$input.params().path\\\"\\n      queryParams: \\\"$input.params().querystring\\\"\",\n      \"language\": \"yaml\",\n      \"name\": \"s-templates.yaml\"\n    }\n  ]\n}\n[/block]\nYou can add specify templates in your `s-project.json, s-resources-cf.json, and s-function.json` configuration files by enclosing them in `$${}`, like this:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"requestTemplates\\\": \\\"$${apiGatewayRequestTemplate}\\\"\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json\"\n    }\n  ]\n}\n[/block]\nTo define templates in your project, add them to `s-templates.json` files to the root of your project, subfolders, or function folders. \n\n```\nproject\n|_ s-templates.json\n    subfolder\n    |_ s-templates.json\n        function\n        |_ s-templates.json\n```\nYou can add them wherever you think is best.  The reason you can add template files in multiple folders is because templates defined in parent folders can be extended by templates defined in subfolders.\n\nLet's put the examples above together.  
Say we have two template files that extend the template  \"apiGatewayRequestTemplate\": \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"apiGatewayRequestTemplate\\\": {\\n    \\\"application/json\\\": {\\n      \\\"body\\\": \\\"$input.json('$')\\\",\\n      \\\"pathParams\\\" : \\\"$input.params().path\\\",\\n      \\\"queryParams\\\" : \\\"$input.params().querystring\\\"\\n    }\\n  }\\n}\",\n      \"language\": \"json\",\n      \"name\": \"project/s-templates.json\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"apiGatewayRequestTemplate\\\": {\\n    \\\"application/json\\\": {\\n      \\\"pathId\\\": \\\"$input.params('id')\\\"\\n    }\\n  }\\n}\",\n      \"language\": \"json\",\n      \"name\": \"project/functions/function/s-templates.json\"\n    }\n  ]\n}\n[/block]\nThis template is used for an endpoint's [API Gateway Mapping Template](http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html).  In a Serverless project, this information is specified in the `s-function.json` containing the endpoint, like this:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"name\\\": \\\"show\\\",\\n  \\\"handler\\\": \\\"multi/show/handler.handler\\\",\\n  \\\"runtime\\\": \\\"nodejs\\\",\\n  \\\"timeout\\\": 6,\\n  \\\"memorySize\\\": 256,\\n  \\\"custom\\\": {},\\n  \\\"endpoints\\\": [\\n    {\\n      \\\"path\\\": \\\"users/show/{id}\\\",\\n      \\\"method\\\": \\\"GET\\\",\\n      \\\"authorizationType\\\": \\\"none\\\",\\n      \\\"apiKeyRequired\\\": false,\\n      \\\"requestParameters\\\": {},\\n      \\\"requestTemplates\\\": \\\"$${apiGatewayRequestTemplate}\\\",\\n      \\\"responses\\\": {}\\n    }\\n  ]\\n}\",\n      \"language\": \"json\",\n      \"name\": \"project/functions/function/s-function.json\"\n    }\n  ]\n}\n[/block]\n\nThe `apiGatewayRequestTemplate` template in the endpoint syntax above would be populated with the combination of both of the above templates.\n\nAlso, you can extend a template in a parent folder by templates in multiple subfolders at the same time.  Each subfolders will get an aggregated template that contains unique values, depending on what values they extended the parent template with. \n\nLastly, you can put templates in your `s-resources-cf.json` file as well.  Project Variables (described below) can also be included in templates.  The Framework first populates all templates, then populates all variables.\n\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Templates For Function Names\",\n  \"body\": \"Create custom Function Name Templates by putting a `\\\"functionName\\\": \\\"${project}-${stage}-${name}\\\"` key in `s-templates.json` in the root of your project.  `${name}` is a reserved Project Variable and is always populated with the name property in the current configuration file.  Then in `s-function.json`, put this: `\\\"customName\\\": \\\"$${functionName}\\\"`\"\n}\n[/block]\n## Variables\nVariables hold strings or integers.  They enable you to add dynamic values to your configuration files that change with each project stage and region.  This is great for AWS Account specific data that changes across your stages and regions, like ARNs.  \n\nAll of your variables are defined inside the `_meta/variables` folder (the `_meta` folder is gitgnored by default). 
Inside this variables folder you'll find a JSON file for each stage and each region for that stage.\n\nHere's an example of defining a new variable in the `us-east-1` region in the `dev` stage. If you open the `s-variables-dev-useast1.json` file, you'll find the following default variables which are used by our framework:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"region\\\": \\\"us-east-1\\\",\\n  \\\"resourcesStackName\\\": \\\"projectName-dev-r\\\",\\n  \\\"iamRoleArnLambda\\\": \\\"arn:aws:iam::AWSaccount:role/projectName-dev-r-IamRoleLambda-someRole\\\"\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-variables-dev-useast1.json\"\n    }\n  ]\n}\n[/block]\nAs you can see, these values are specific to your project and your AWS account.\n\nYou can now reference this variable in any of your project's configuration files by enclosing it in `${}` (just like with templates, but with a single $ sign). For example, here is a variable being used in the CloudFormation syntax in your `s-resources-cf.json` file:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"IamPolicyLambda\\\": {\\n        \\\"Type\\\": \\\"AWS::IAM::Policy\\\",\\n        \\\"Properties\\\": {\\n          \\\"PolicyName\\\": \\\"${region}-lambda\\\",\\n          ...\\n        }\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-resources-cf.json\"\n    }\n  ]\n}\n[/block]\nYou can use also use variables in `s-template.json` files.  The framework first populates all templates, then populates all variables.","category":"56dac0483dede50b00eacb52","createdAt":"2016-01-13T20:08:09.084Z","excerpt":"Templates offer reusable configuration syntax, Variables offer dynamic configuration values","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":2,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"templates-variables","sync_unique":"","title":"Templates & Variables","type":"basic","updates":["56a07f4c4583912300b5f01b","56a0e554470ae00d00c30573","56c26286ddcb5119004cab70","56c3b4d434df460d00c2bea6","56cf3235336aa60b0086a1fb","56da86353dede50b00eacb3c","56f2c49ed67b16190084cd24","5720b508db52d01700f5d1a6","572e33bf6922930e00a8b103","5730bb070f929f3600841c51"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Templates & Variables

Templates offer reusable configuration syntax, Variables offer dynamic configuration values

Serverless Projects use configuration files written in JSON which can get big, and they sometimes need to include dynamic values that change for stages and regions (such as ARNs).

To reduce redundancy in the config files, we created Project Templates. To allow for dynamic values in the config files, we created Project Variables.

[block:callout]
{
  "type": "warning",
  "title": "Templates & Variables are for Configuration Only",
  "body": "Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables."
}
[/block]

## Templates

Templates are variables containing objects, arrays or strings. These dramatically reduce redundancy in your configuration files, since you can use the same template in multiple places.

Here is an example of a template that can be used as an [API Gateway Mapping Template](http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html):

[block:code]
{
  "codes": [
    {
      "code": "{\n \"apiGatewayRequestTemplate\": {\n \"application/json\": {\n \"body\": \"$input.json('$')\",\n \"pathParams\" : \"$input.params().path\",\n \"queryParams\" : \"$input.params().querystring\"\n }\n }\n}",
      "language": "json",
      "name": "s-templates.json"
    }
  ]
}
[/block]

Or, it can be a YAML file instead of JSON:

[block:code]
{
  "codes": [
    {
      "code": " apiGatewayRequestTemplate: \n application/json: \n body: \"$input.json('$')\"\n pathParams: \"$input.params().path\"\n queryParams: \"$input.params().querystring\"",
      "language": "yaml",
      "name": "s-templates.yaml"
    }
  ]
}
[/block]

You can reference templates in your `s-project.json`, `s-resources-cf.json` and `s-function.json` configuration files by enclosing the template name in `$${}`, like this:

[block:code]
{
  "codes": [
    {
      "code": "\"requestTemplates\": \"$${apiGatewayRequestTemplate}\"",
      "language": "json",
      "name": "s-function.json"
    }
  ]
}
[/block]

To define templates in your project, add them to `s-templates.json` files in the root of your project, in subfolders, or in function folders:

```
project
|_ s-templates.json
    subfolder
    |_ s-templates.json
        function
        |_ s-templates.json
```

You can add them wherever you think is best. The reason template files can live in multiple folders is that templates defined in parent folders can be extended by templates defined in subfolders.

Let's put the examples above together. Say we have two template files that define and extend the `apiGatewayRequestTemplate` template:

[block:code]
{
  "codes": [
    {
      "code": "{\n \"apiGatewayRequestTemplate\": {\n \"application/json\": {\n \"body\": \"$input.json('$')\",\n \"pathParams\" : \"$input.params().path\",\n \"queryParams\" : \"$input.params().querystring\"\n }\n }\n}",
      "language": "json",
      "name": "project/s-templates.json"
    }
  ]
}
[/block]

[block:code]
{
  "codes": [
    {
      "code": "{\n \"apiGatewayRequestTemplate\": {\n \"application/json\": {\n \"pathId\": \"$input.params('id')\"\n }\n }\n}",
      "language": "json",
      "name": "project/functions/function/s-templates.json"
    }
  ]
}
[/block]

This template is used for an endpoint's [API Gateway Mapping Template](http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html). In a Serverless project, this information is specified in the `s-function.json` containing the endpoint, like this:

[block:code]
{
  "codes": [
    {
      "code": "{\n \"name\": \"show\",\n \"handler\": \"multi/show/handler.handler\",\n \"runtime\": \"nodejs\",\n \"timeout\": 6,\n \"memorySize\": 256,\n \"custom\": {},\n \"endpoints\": [\n {\n \"path\": \"users/show/{id}\",\n \"method\": \"GET\",\n \"authorizationType\": \"none\",\n \"apiKeyRequired\": false,\n \"requestParameters\": {},\n \"requestTemplates\": \"$${apiGatewayRequestTemplate}\",\n \"responses\": {}\n }\n ]\n}",
      "language": "json",
      "name": "project/functions/function/s-function.json"
    }
  ]
}
[/block]

The `apiGatewayRequestTemplate` template in the endpoint syntax above would be populated with the combination of both of the templates shown above.

You can also extend a template in a parent folder with templates in multiple subfolders at the same time. Each subfolder will get an aggregated template containing its own unique values, depending on what it extended the parent template with.

Lastly, you can put templates in your `s-resources-cf.json` file as well. Project Variables (described below) can also be included in templates. The Framework first populates all templates, then populates all variables.

[block:callout]
{
  "type": "info",
  "title": "Templates For Function Names",
  "body": "Create custom Function Name Templates by putting a `\"functionName\": \"${project}-${stage}-${name}\"` key in `s-templates.json` in the root of your project. `${name}` is a reserved Project Variable and is always populated with the name property in the current configuration file. Then in `s-function.json`, put this: `\"customName\": \"$${functionName}\"`"
}
[/block]

## Variables

Variables hold strings or integers. They enable you to add dynamic values to your configuration files that change with each project stage and region. This is great for AWS account-specific data that changes across your stages and regions, like ARNs.

All of your variables are defined inside the `_meta/variables` folder (the `_meta` folder is gitignored by default). Inside this variables folder you'll find a JSON file for each stage and each region for that stage.

Here's an example from the `us-east-1` region in the `dev` stage. If you open the `s-variables-dev-useast1.json` file, you'll find the following default variables, which are used by the Framework:

[block:code]
{
  "codes": [
    {
      "code": "{\n \"region\": \"us-east-1\",\n \"resourcesStackName\": \"projectName-dev-r\",\n \"iamRoleArnLambda\": \"arn:aws:iam::AWSaccount:role/projectName-dev-r-IamRoleLambda-someRole\"\n}",
      "language": "json",
      "name": "s-variables-dev-useast1.json"
    }
  ]
}
[/block]

As you can see, these values are specific to your project and your AWS account.

You can reference a variable in any of your project's configuration files by enclosing its name in `${}` (just like templates, but with a single `$` sign). For example, here is a variable being used in the CloudFormation syntax in your `s-resources-cf.json` file:

[block:code]
{
  "codes": [
    {
      "code": "\"IamPolicyLambda\": {\n \"Type\": \"AWS::IAM::Policy\",\n \"Properties\": {\n \"PolicyName\": \"${region}-lambda\",\n ...\n }\n}",
      "language": "json",
      "name": "s-resources-cf.json"
    }
  ]
}
[/block]

You can also use variables in `s-templates.json` files. The Framework first populates all templates, then populates all variables.
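To make the extension behaviour concrete, here is roughly what the combined `apiGatewayRequestTemplate` looks like for the function above once the parent and child `s-templates.json` files are merged. This is a sketch of the expected result, shown as a Node.js object with comments, not literal Framework output:

```javascript
// Rough illustration of the merged template for project/functions/function:
// the child s-templates.json adds `pathId` on top of the keys defined in the
// parent project/s-templates.json.
var mergedApiGatewayRequestTemplate = {
  'application/json': {
    body: "$input.json('$')",                   // from project/s-templates.json
    pathParams: '$input.params().path',         // from project/s-templates.json
    queryParams: '$input.params().querystring', // from project/s-templates.json
    pathId: "$input.params('id')"               // added by the function's s-templates.json
  }
};
```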
{"__v":29,"_id":"56dac0493dede50b00eacb75","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"[block:callout]\n{\n  \"type\": \"warning\",\n  \"body\": \"The `s-function.json` file is just a reflection of AWS configurations. If some of the Function and Endpoint configurations feel alien to you, you need to familiarize yourself with [AWS Lambda docs (including event sources)](http://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and [API Gateway docs](https://aws.amazon.com/api-gateway/).\",\n  \"title\": \"AWS Docs\"\n}\n[/block]\nServerless Functions is the core of your project. All function configurations are in the `s-function.json` file, and it's the most complex Serverless JSON file. This file contains configuration for your Function, its Endpoints and Event Sources. Here's an example `s-function.json`:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"name\\\": \\\"function1\\\",\\n  \\\"description\\\": \\\"My first lambda in project ${project} on stage ${stage}\\\",\\n  \\\"customName\\\": false,\\n  \\\"customRole\\\": false,\\n  \\\"handler\\\": \\\"function1/handler.handler\\\",\\n  \\\"timeout\\\": 6,\\n  \\\"memorySize\\\": 1024,\\n  \\\"custom\\\": {\\n    \\\"excludePatterns\\\": []\\n  },\\n  \\\"environment\\\": {\\n    \\\"SOME_ENV_VAR\\\": \\\"${envVarValue}\\\"\\n  },\\n  \\\"endpoints\\\": [\\n    {\\n      \\\"path\\\": \\\"fun\\\",\\n      \\\"method\\\": \\\"GET\\\",\\n      \\\"type\\\": \\\"AWS\\\",\\n      \\\"authorizationType\\\": \\\"none\\\",\\n      \\\"authorizationFunction\\\": \\\"\\\",\\n      \\\"apiKeyRequired\\\": false,\\n      \\\"requestParameters\\\": {},\\n      \\\"requestTemplates\\\": {\\n        \\\"application/json\\\": \\\"\\\"\\n      },\\n      \\\"responses\\\": {\\n        \\\"400\\\": {\\n          \\\"statusCode\\\": \\\"400\\\"\\n        },\\n        \\\"default\\\": {\\n          \\\"statusCode\\\": \\\"200\\\",\\n          \\\"responseParameters\\\": {},\\n          \\\"responseModels\\\": {},\\n          \\\"responseTemplates\\\": {\\n            \\\"application/json\\\": \\\"\\\"\\n          }\\n        }\\n      }\\n    }\\n  ],\\n  \\\"events\\\": [\\n    {\\n      \\\"name\\\" : \\\"myEventSource\\\",\\n      \\\"type\\\": \\\"schedule\\\",\\n      \\\"config\\\": {\\n         \\\"schedule\\\": \\\"rate(5 minutes)\\\"\\n      }\\n    }\\n],\\n\\\"vpc\\\": {\\n    \\\"securityGroupIds\\\": [],\\n    \\\"subnetIds\\\": []\\n  }\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json\"\n    }\n  ]\n}\n[/block]\nThe `s-function.json` file properties can be divided into three categories: **Function**, **Endpoint** and **Event** configurations.\n\n## Function Configurations\n* **name**: the name of this function. This matches the name of the function folder. **The name of your function must be unique project wide**. Because this is how we identify the function.\n* **description**: the description of the lambda function that will be visible in the AWS lambda console. Defaults to 'Serverless Lambda function for project ...' if not set.\n* **customName**: the name of the lambda function. It's set to `false` by default. This means that you don't want to set a custom name for your lambda, and instead we'll name the lambda for you using a combination of project and function names.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"name vs. 
customName\",\n  \"body\": \"The `name` property is the function name within the context of your Serverless Project, while the `customName` property is the **actual lambda name that is deployed to AWS**. If you don't set a `customName`, we'll generate a lambda name for you that is just a combination of the project and function names.\"\n}\n[/block]\n* **customRole**: you can set a custom IAM role ARN in this property to override the default project IAM role. It's set to false by default.\n* **handler**: The path to the lambda `handler.js` file relative to the root of the directory you want deployed along with your function.\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"The Handler Property Is Very Flexible\",\n  \"body\": \"The handler property gives you the ability to share code between your functions. By default the handler property is `handler.handler`, that means it's only relative to the function folder, so only the function folder will be deployed to Lambda.\\n\\nIf however you want to include the parent subfolder of a function, you should change the handler to be like this: `functionName/handler.handler`. As you can see, the path to the handler now includes the function folder, which means that the path is now relative to the parent subfolder, so in that case the parent subfolder will be deployed along with your function. So if you have a `lib` folder in that parent subfolder that is required by your function, it'll be deployed with your function.\\n\\nThis also gives you the ability to handle npm dependencies however you like. If you have a `package.json` and `node_modules` in that parent subfolder, it'll be included in the deployed lambda. So the more parent folders you include in the handler path, the higher you go in the file tree.\"\n}\n[/block]\n* **timeout**: the maximum running time allowed for your lambda function. Default is 6 seconds. Maximum allowed timeout by AWS is 300 seconds.\n* **memorySize**: the memory size of your lambda function. Default is 1024MB, but you can set it up to 1.5GB. This is a limit set by AWS lambda.\n* **vpc**: an object containing configurations for using the function to access resources in AWS Virtual Private Cloud. This object contains two keys: `securityGroupIds` and `subnetIds`. The value of both of these keys are an array that contains the relevant security group and subnet Ids. For a detailed guide on using VPC with Lambda, checkout [these notes](https://github.com/serverless/serverless/issues/629#issuecomment-184472421) shared by our awesome contributor [Stacey Moore](https://github.com/staceymoore).\n* **environment**: this is where you define environment variables needed for this function. Each env var is a key in this `environment` object, and the env var value is the value of this key. You can make use of Serverless variables in the `_meta` folder to reference sensitive information.\n* **custom**: this property allows plugin authors to add more function configurations for their plugins. It has the following default property: \n    * **excludePatterns**: this is where you set whatever you want to exclude during function deployment.\n\n## Endpoint Configurations \nEach function (`s-function.json`) can have multiple endpoints. They are all defined in the `endpoints` property. It's an  array that contains multiple Endpoint objects. Each of which have the following configurations:\n\n* **path**: The path of the endpoint. This value gets added to the stage and AWS host to construct the full endpoint url (i.e. 
`https://fpq68h492f.execute-api.us-east-1.amazonaws.com/development/group1/endpoint-path1`)\n* **method**: The HTTP method for this endpoint. Set to `GET` by default.\n* **authorizationType**: The type of authorization for your endpoint. Default is `none`.  If you want to use AWS IAM tokens, then specify `AWS_IAM`.  If you want to use a custom authorizer function, specify `CUSTOM` and don't forget to set the *authorizerFunction* key below. \n* **authorizerFunction**: The name of the function (from `s-function.json`) that you will be using as the custom auth function.  Only used if *authorizationType* is set to `CUSTOM`.\n* **apiKeyRequired**: Whether or not an API Key is required for your endpoint. Default is `false`.\n* **requestParameters**: Sets the AWS request parameters for your endpoint. Request parameters are represented as a key/value map, the key must match the pattern of _integration.request.**{location}**.**{integrationName}**_, and the value must match the pattern of _method.request.**{location}**.**{apiName}**_.  _**location**_ is either _querystring_, _path_, or _header_. _**integrationName**_ indicates the name that will be used to send this value to the backend lambda. _**apiName**_ is the name that callers of you Api will use.  For example, to set this to pass an Authorization header from the original api call to the backend Lambda, add the following key/value: `\"integration.request.header.Authorization\": \"method.request.header.Authorization\"`\n* **requestTemplates**: Sets the AWS request templates for your endpoint.\n* **responses**: Sets the AWS responses parameters and templates. By default there are two response objects in your `s-function.json`, \"400\" and \"default\". \n      * **selectionPattern**  The `selectionPattern` property of these objects maps to the Lambda Error Regex in the Integration Response.\n\n## Event Sources Configurations \nEach function (`s-function.json`) can have multiple event sources. They are all defined in the `events` property. It's an array that contains multiple Event objects. Each of which have the following configurations:\n\n* **name**: a name for your event source that is unique within your function. We use this name to reference the event source and construct the event source `sPath`.\n* **type**: the type of your event source. We currently support 5 event sources: `dynamodbstream`, `kinesisstream`, `s3`, `sns`, and `schedule`.\n* **config**: configuration object for your event source. The properties of this object depends on the event source `type`. 
Below are examples for each event source types and their configurations.\n\n### Event Sources Examples\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"name\\\" : \\\"myDynamoDbTable\\\",\\n\\\"type\\\": \\\"dynamodbstream\\\",\\n\\\"config\\\": {\\n  \\\"streamArn\\\": \\\"${streamArnVariable}\\\", // required!\\n  \\\"startingPosition\\\": \\\"LATEST\\\", // default is \\\"TRIM_HORIZON\\\" if not provided\\n  \\\"batchSize\\\": 50, // default is 100 if not provided\\n  \\\"enabled\\\": false // default is true if not provided\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json - DynamoDB Stream Event Source\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"name\\\" : \\\"myKinesisStream\\\",\\n\\\"type\\\": \\\"kinesisstream\\\",\\n\\\"config\\\": {\\n  \\\"streamArn\\\": \\\"${streamArnVariable}\\\", // required!\\n  \\\"startingPosition\\\": \\\"LATEST\\\", // default is \\\"TRIM_HORIZON\\\" if not provided\\n  \\\"batchSize\\\": 50, // default is 100 if not provided\\n  \\\"enabled\\\": false // default is true if not provided \\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json - Kinesis Stream Event Source\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"name\\\" : \\\"myS3Event\\\",\\n\\\"type\\\": \\\"s3\\\",\\n\\\"config\\\": {\\n  \\\"bucket\\\": \\\"${testEventBucket}\\\", // required! - bucket name\\n  \\\"bucketEvents\\\": [\\\"s3:ObjectCreated:*\\\"], // required! - an array of events that should trigger the lambda\\n  \\\"filterRules\\\" : [\\n      {\\n          \\\"name\\\" : \\\"prefix | suffix\\\",\\n          \\\"value\\\" : \\\"STRING_VALUE\\\"\\n      }\\n   ] // optional, specify prefix or suffix for S3 event source\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json - S3 Event Source\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"name\\\" : \\\"mySNSEvent\\\",\\n\\\"type\\\": \\\"sns\\\",\\n\\\"config\\\": {\\n  \\\"topicName\\\": \\\"test-event-source\\\" // required! - the topic name you want your lambda to subscribe to\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json - SNS Event Source\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"\\\"name\\\" : \\\"mySchedule\\\",\\n\\\"type\\\": \\\"schedule\\\",\\n\\\"config\\\": {\\n    \\\"schedule\\\": \\\"rate(5 minutes)\\\", // required! - could also take a cron expression: \\\"cron(0 20 * * ? 
*)\\\"\\n    \\\"enabled\\\": true // default is false if not provided\\n}\",\n      \"language\": \"json\",\n      \"name\": \"s-function.json - Schedule Event Source\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Real World Examples\",\n  \"body\": \"If you'd like to see a real world working example of `s-function.json` with event sources, [check out our official unit test function](https://github.com/serverless/serverless/blob/master/tests/test-prj/functions/function0/s-function.json).\"\n}\n[/block]","category":"56dac0483dede50b00eacb52","createdAt":"2016-02-11T07:46:21.909Z","excerpt":"All you need to know about configuring your functions, endpoints and event sources.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":3,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-configuration","sync_unique":"","title":"Function Configuration","type":"basic","updates":["56be017dba9df50d00a7a1bc","56bfba3bff5b440d0053edb4","56bfffab8e032e170076dd2a","56c0f27cd344e517000207ff","56eed1b8aff1620e00a32327","56f2d93992cce10e00eaefe2","56f42301345cc52000fcfcdc","56f4255b54c3ff2000533fd6","56fc497e896b2c0e00715eeb","56fc49ddb766ad0e00a37d67","5703db0ab96a810e009a0b97","570bf92fce91c70e00774c56","5717981cfdcb310e00f24187","572e3ab38285700e00c93acf","572e3e3aa2415f0e00bcbc5c","573152cc4245100e00174429","5731f6c0a825f30e00b51ae8","573dd69f8cf1492400bba6ea","5761cb2da7c9f729009a75f3","57903e04c7e51a340045aeff"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Configuration

All you need to know about configuring your functions, endpoints and event sources.

[block:callout] { "type": "warning", "body": "The `s-function.json` file is just a reflection of AWS configurations. If some of the Function and Endpoint configurations feel alien to you, you need to familiarize yourself with [AWS Lambda docs (including event sources)](http://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and [API Gateway docs](https://aws.amazon.com/api-gateway/).", "title": "AWS Docs" } [/block] Serverless Functions is the core of your project. All function configurations are in the `s-function.json` file, and it's the most complex Serverless JSON file. This file contains configuration for your Function, its Endpoints and Event Sources. Here's an example `s-function.json`: [block:code] { "codes": [ { "code": "{\n \"name\": \"function1\",\n \"description\": \"My first lambda in project ${project} on stage ${stage}\",\n \"customName\": false,\n \"customRole\": false,\n \"handler\": \"function1/handler.handler\",\n \"timeout\": 6,\n \"memorySize\": 1024,\n \"custom\": {\n \"excludePatterns\": []\n },\n \"environment\": {\n \"SOME_ENV_VAR\": \"${envVarValue}\"\n },\n \"endpoints\": [\n {\n \"path\": \"fun\",\n \"method\": \"GET\",\n \"type\": \"AWS\",\n \"authorizationType\": \"none\",\n \"authorizationFunction\": \"\",\n \"apiKeyRequired\": false,\n \"requestParameters\": {},\n \"requestTemplates\": {\n \"application/json\": \"\"\n },\n \"responses\": {\n \"400\": {\n \"statusCode\": \"400\"\n },\n \"default\": {\n \"statusCode\": \"200\",\n \"responseParameters\": {},\n \"responseModels\": {},\n \"responseTemplates\": {\n \"application/json\": \"\"\n }\n }\n }\n }\n ],\n \"events\": [\n {\n \"name\" : \"myEventSource\",\n \"type\": \"schedule\",\n \"config\": {\n \"schedule\": \"rate(5 minutes)\"\n }\n }\n],\n\"vpc\": {\n \"securityGroupIds\": [],\n \"subnetIds\": []\n }\n}", "language": "json", "name": "s-function.json" } ] } [/block] The `s-function.json` file properties can be divided into three categories: **Function**, **Endpoint** and **Event** configurations. ## Function Configurations * **name**: the name of this function. This matches the name of the function folder. **The name of your function must be unique project wide**. Because this is how we identify the function. * **description**: the description of the lambda function that will be visible in the AWS lambda console. Defaults to 'Serverless Lambda function for project ...' if not set. * **customName**: the name of the lambda function. It's set to `false` by default. This means that you don't want to set a custom name for your lambda, and instead we'll name the lambda for you using a combination of project and function names. [block:callout] { "type": "warning", "title": "name vs. customName", "body": "The `name` property is the function name within the context of your Serverless Project, while the `customName` property is the **actual lambda name that is deployed to AWS**. If you don't set a `customName`, we'll generate a lambda name for you that is just a combination of the project and function names." } [/block] * **customRole**: you can set a custom IAM role ARN in this property to override the default project IAM role. It's set to false by default. * **handler**: The path to the lambda `handler.js` file relative to the root of the directory you want deployed along with your function. [block:callout] { "type": "info", "title": "The Handler Property Is Very Flexible", "body": "The handler property gives you the ability to share code between your functions. 
By default the handler property is `handler.handler`, that means it's only relative to the function folder, so only the function folder will be deployed to Lambda.\n\nIf however you want to include the parent subfolder of a function, you should change the handler to be like this: `functionName/handler.handler`. As you can see, the path to the handler now includes the function folder, which means that the path is now relative to the parent subfolder, so in that case the parent subfolder will be deployed along with your function. So if you have a `lib` folder in that parent subfolder that is required by your function, it'll be deployed with your function.\n\nThis also gives you the ability to handle npm dependencies however you like. If you have a `package.json` and `node_modules` in that parent subfolder, it'll be included in the deployed lambda. So the more parent folders you include in the handler path, the higher you go in the file tree." } [/block] * **timeout**: the maximum running time allowed for your lambda function. Default is 6 seconds. Maximum allowed timeout by AWS is 300 seconds. * **memorySize**: the memory size of your lambda function. Default is 1024MB, but you can set it up to 1.5GB. This is a limit set by AWS lambda. * **vpc**: an object containing configurations for using the function to access resources in AWS Virtual Private Cloud. This object contains two keys: `securityGroupIds` and `subnetIds`. The value of both of these keys are an array that contains the relevant security group and subnet Ids. For a detailed guide on using VPC with Lambda, checkout [these notes](https://github.com/serverless/serverless/issues/629#issuecomment-184472421) shared by our awesome contributor [Stacey Moore](https://github.com/staceymoore). * **environment**: this is where you define environment variables needed for this function. Each env var is a key in this `environment` object, and the env var value is the value of this key. You can make use of Serverless variables in the `_meta` folder to reference sensitive information. * **custom**: this property allows plugin authors to add more function configurations for their plugins. It has the following default property: * **excludePatterns**: this is where you set whatever you want to exclude during function deployment. ## Endpoint Configurations Each function (`s-function.json`) can have multiple endpoints. They are all defined in the `endpoints` property. It's an array that contains multiple Endpoint objects. Each of which have the following configurations: * **path**: The path of the endpoint. This value gets added to the stage and AWS host to construct the full endpoint url (i.e. `https://fpq68h492f.execute-api.us-east-1.amazonaws.com/development/group1/endpoint-path1`) * **method**: The HTTP method for this endpoint. Set to `GET` by default. * **authorizationType**: The type of authorization for your endpoint. Default is `none`. If you want to use AWS IAM tokens, then specify `AWS_IAM`. If you want to use a custom authorizer function, specify `CUSTOM` and don't forget to set the *authorizerFunction* key below. * **authorizerFunction**: The name of the function (from `s-function.json`) that you will be using as the custom auth function. Only used if *authorizationType* is set to `CUSTOM`. * **apiKeyRequired**: Whether or not an API Key is required for your endpoint. Default is `false`. * **requestParameters**: Sets the AWS request parameters for your endpoint. 
Request parameters are represented as a key/value map, the key must match the pattern of _integration.request.**{location}**.**{integrationName}**_, and the value must match the pattern of _method.request.**{location}**.**{apiName}**_. _**location**_ is either _querystring_, _path_, or _header_. _**integrationName**_ indicates the name that will be used to send this value to the backend lambda. _**apiName**_ is the name that callers of you Api will use. For example, to set this to pass an Authorization header from the original api call to the backend Lambda, add the following key/value: `"integration.request.header.Authorization": "method.request.header.Authorization"` * **requestTemplates**: Sets the AWS request templates for your endpoint. * **responses**: Sets the AWS responses parameters and templates. By default there are two response objects in your `s-function.json`, "400" and "default". * **selectionPattern** The `selectionPattern` property of these objects maps to the Lambda Error Regex in the Integration Response. ## Event Sources Configurations Each function (`s-function.json`) can have multiple event sources. They are all defined in the `events` property. It's an array that contains multiple Event objects. Each of which have the following configurations: * **name**: a name for your event source that is unique within your function. We use this name to reference the event source and construct the event source `sPath`. * **type**: the type of your event source. We currently support 5 event sources: `dynamodbstream`, `kinesisstream`, `s3`, `sns`, and `schedule`. * **config**: configuration object for your event source. The properties of this object depends on the event source `type`. Below are examples for each event source types and their configurations. ### Event Sources Examples [block:code] { "codes": [ { "code": "\"name\" : \"myDynamoDbTable\",\n\"type\": \"dynamodbstream\",\n\"config\": {\n \"streamArn\": \"${streamArnVariable}\", // required!\n \"startingPosition\": \"LATEST\", // default is \"TRIM_HORIZON\" if not provided\n \"batchSize\": 50, // default is 100 if not provided\n \"enabled\": false // default is true if not provided\n}", "language": "json", "name": "s-function.json - DynamoDB Stream Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"myKinesisStream\",\n\"type\": \"kinesisstream\",\n\"config\": {\n \"streamArn\": \"${streamArnVariable}\", // required!\n \"startingPosition\": \"LATEST\", // default is \"TRIM_HORIZON\" if not provided\n \"batchSize\": 50, // default is 100 if not provided\n \"enabled\": false // default is true if not provided \n}", "language": "json", "name": "s-function.json - Kinesis Stream Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"myS3Event\",\n\"type\": \"s3\",\n\"config\": {\n \"bucket\": \"${testEventBucket}\", // required! - bucket name\n \"bucketEvents\": [\"s3:ObjectCreated:*\"], // required! - an array of events that should trigger the lambda\n \"filterRules\" : [\n {\n \"name\" : \"prefix | suffix\",\n \"value\" : \"STRING_VALUE\"\n }\n ] // optional, specify prefix or suffix for S3 event source\n}", "language": "json", "name": "s-function.json - S3 Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"mySNSEvent\",\n\"type\": \"sns\",\n\"config\": {\n \"topicName\": \"test-event-source\" // required! 
- the topic name you want your lambda to subscribe to\n}", "language": "json", "name": "s-function.json - SNS Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"mySchedule\",\n\"type\": \"schedule\",\n\"config\": {\n \"schedule\": \"rate(5 minutes)\", // required! - could also take a cron expression: \"cron(0 20 * * ? *)\"\n \"enabled\": true // default is false if not provided\n}", "language": "json", "name": "s-function.json - Schedule Event Source" } ] } [/block] [block:callout] { "type": "info", "title": "Real World Examples", "body": "If you'd like to see a real world working example of `s-function.json` with event sources, [check out our official unit test function](https://github.com/serverless/serverless/blob/master/tests/test-prj/functions/function0/s-function.json)." } [/block]
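The handler callout above describes how deploying a parent subfolder lets functions share code and dependencies. Here is a minimal Node.js sketch of that layout, assuming a hypothetical `restApi/users` resource whose `s-function.json` sets `"handler": "createUser/handler.handler"` so the parent folder ships with the function. The file names, the `users` library, and the `USERS_TABLE` environment variable are illustrative, not Framework defaults:

```javascript
'use strict';

// Hypothetical layout that gets deployed together because the handler path
// includes the function folder ("createUser/handler.handler"):
//
//   restApi/users/
//   |_ lib/users.js       <- shared code (illustrative)
//   |_ node_modules/      <- shared npm dependencies
//   |_ createUser/
//      |_ handler.js      <- this file
//
var users = require('../lib/users'); // resolvable because the parent folder is deployed too

module.exports.handler = function(event, context) {
  // Environment variables declared in s-function.json are made available to the
  // deployed code; reading them via process.env is shown here as an assumption.
  var tableName = process.env.USERS_TABLE;

  users.create(tableName, event.body, function(err, user) {
    if (err) return context.fail(err);
    return context.succeed(user);
  });
};
```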
[block:callout] { "type": "warning", "body": "The `s-function.json` file is just a reflection of AWS configurations. If some of the Function and Endpoint configurations feel alien to you, you need to familiarize yourself with [AWS Lambda docs (including event sources)](http://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and [API Gateway docs](https://aws.amazon.com/api-gateway/).", "title": "AWS Docs" } [/block] Serverless Functions is the core of your project. All function configurations are in the `s-function.json` file, and it's the most complex Serverless JSON file. This file contains configuration for your Function, its Endpoints and Event Sources. Here's an example `s-function.json`: [block:code] { "codes": [ { "code": "{\n \"name\": \"function1\",\n \"description\": \"My first lambda in project ${project} on stage ${stage}\",\n \"customName\": false,\n \"customRole\": false,\n \"handler\": \"function1/handler.handler\",\n \"timeout\": 6,\n \"memorySize\": 1024,\n \"custom\": {\n \"excludePatterns\": []\n },\n \"environment\": {\n \"SOME_ENV_VAR\": \"${envVarValue}\"\n },\n \"endpoints\": [\n {\n \"path\": \"fun\",\n \"method\": \"GET\",\n \"type\": \"AWS\",\n \"authorizationType\": \"none\",\n \"authorizationFunction\": \"\",\n \"apiKeyRequired\": false,\n \"requestParameters\": {},\n \"requestTemplates\": {\n \"application/json\": \"\"\n },\n \"responses\": {\n \"400\": {\n \"statusCode\": \"400\"\n },\n \"default\": {\n \"statusCode\": \"200\",\n \"responseParameters\": {},\n \"responseModels\": {},\n \"responseTemplates\": {\n \"application/json\": \"\"\n }\n }\n }\n }\n ],\n \"events\": [\n {\n \"name\" : \"myEventSource\",\n \"type\": \"schedule\",\n \"config\": {\n \"schedule\": \"rate(5 minutes)\"\n }\n }\n],\n\"vpc\": {\n \"securityGroupIds\": [],\n \"subnetIds\": []\n }\n}", "language": "json", "name": "s-function.json" } ] } [/block] The `s-function.json` file properties can be divided into three categories: **Function**, **Endpoint** and **Event** configurations. ## Function Configurations * **name**: the name of this function. This matches the name of the function folder. **The name of your function must be unique project wide**. Because this is how we identify the function. * **description**: the description of the lambda function that will be visible in the AWS lambda console. Defaults to 'Serverless Lambda function for project ...' if not set. * **customName**: the name of the lambda function. It's set to `false` by default. This means that you don't want to set a custom name for your lambda, and instead we'll name the lambda for you using a combination of project and function names. [block:callout] { "type": "warning", "title": "name vs. customName", "body": "The `name` property is the function name within the context of your Serverless Project, while the `customName` property is the **actual lambda name that is deployed to AWS**. If you don't set a `customName`, we'll generate a lambda name for you that is just a combination of the project and function names." } [/block] * **customRole**: you can set a custom IAM role ARN in this property to override the default project IAM role. It's set to false by default. * **handler**: The path to the lambda `handler.js` file relative to the root of the directory you want deployed along with your function. [block:callout] { "type": "info", "title": "The Handler Property Is Very Flexible", "body": "The handler property gives you the ability to share code between your functions. 
By default the handler property is `handler.handler`, that means it's only relative to the function folder, so only the function folder will be deployed to Lambda.\n\nIf however you want to include the parent subfolder of a function, you should change the handler to be like this: `functionName/handler.handler`. As you can see, the path to the handler now includes the function folder, which means that the path is now relative to the parent subfolder, so in that case the parent subfolder will be deployed along with your function. So if you have a `lib` folder in that parent subfolder that is required by your function, it'll be deployed with your function.\n\nThis also gives you the ability to handle npm dependencies however you like. If you have a `package.json` and `node_modules` in that parent subfolder, it'll be included in the deployed lambda. So the more parent folders you include in the handler path, the higher you go in the file tree." } [/block] * **timeout**: the maximum running time allowed for your lambda function. Default is 6 seconds. Maximum allowed timeout by AWS is 300 seconds. * **memorySize**: the memory size of your lambda function. Default is 1024MB, but you can set it up to 1.5GB. This is a limit set by AWS lambda. * **vpc**: an object containing configurations for using the function to access resources in AWS Virtual Private Cloud. This object contains two keys: `securityGroupIds` and `subnetIds`. The value of both of these keys are an array that contains the relevant security group and subnet Ids. For a detailed guide on using VPC with Lambda, checkout [these notes](https://github.com/serverless/serverless/issues/629#issuecomment-184472421) shared by our awesome contributor [Stacey Moore](https://github.com/staceymoore). * **environment**: this is where you define environment variables needed for this function. Each env var is a key in this `environment` object, and the env var value is the value of this key. You can make use of Serverless variables in the `_meta` folder to reference sensitive information. * **custom**: this property allows plugin authors to add more function configurations for their plugins. It has the following default property: * **excludePatterns**: this is where you set whatever you want to exclude during function deployment. ## Endpoint Configurations Each function (`s-function.json`) can have multiple endpoints. They are all defined in the `endpoints` property. It's an array that contains multiple Endpoint objects. Each of which have the following configurations: * **path**: The path of the endpoint. This value gets added to the stage and AWS host to construct the full endpoint url (i.e. `https://fpq68h492f.execute-api.us-east-1.amazonaws.com/development/group1/endpoint-path1`) * **method**: The HTTP method for this endpoint. Set to `GET` by default. * **authorizationType**: The type of authorization for your endpoint. Default is `none`. If you want to use AWS IAM tokens, then specify `AWS_IAM`. If you want to use a custom authorizer function, specify `CUSTOM` and don't forget to set the *authorizerFunction* key below. * **authorizerFunction**: The name of the function (from `s-function.json`) that you will be using as the custom auth function. Only used if *authorizationType* is set to `CUSTOM`. * **apiKeyRequired**: Whether or not an API Key is required for your endpoint. Default is `false`. * **requestParameters**: Sets the AWS request parameters for your endpoint. 
Request parameters are represented as a key/value map, the key must match the pattern of _integration.request.**{location}**.**{integrationName}**_, and the value must match the pattern of _method.request.**{location}**.**{apiName}**_. _**location**_ is either _querystring_, _path_, or _header_. _**integrationName**_ indicates the name that will be used to send this value to the backend lambda. _**apiName**_ is the name that callers of you Api will use. For example, to set this to pass an Authorization header from the original api call to the backend Lambda, add the following key/value: `"integration.request.header.Authorization": "method.request.header.Authorization"` * **requestTemplates**: Sets the AWS request templates for your endpoint. * **responses**: Sets the AWS responses parameters and templates. By default there are two response objects in your `s-function.json`, "400" and "default". * **selectionPattern** The `selectionPattern` property of these objects maps to the Lambda Error Regex in the Integration Response. ## Event Sources Configurations Each function (`s-function.json`) can have multiple event sources. They are all defined in the `events` property. It's an array that contains multiple Event objects. Each of which have the following configurations: * **name**: a name for your event source that is unique within your function. We use this name to reference the event source and construct the event source `sPath`. * **type**: the type of your event source. We currently support 5 event sources: `dynamodbstream`, `kinesisstream`, `s3`, `sns`, and `schedule`. * **config**: configuration object for your event source. The properties of this object depends on the event source `type`. Below are examples for each event source types and their configurations. ### Event Sources Examples [block:code] { "codes": [ { "code": "\"name\" : \"myDynamoDbTable\",\n\"type\": \"dynamodbstream\",\n\"config\": {\n \"streamArn\": \"${streamArnVariable}\", // required!\n \"startingPosition\": \"LATEST\", // default is \"TRIM_HORIZON\" if not provided\n \"batchSize\": 50, // default is 100 if not provided\n \"enabled\": false // default is true if not provided\n}", "language": "json", "name": "s-function.json - DynamoDB Stream Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"myKinesisStream\",\n\"type\": \"kinesisstream\",\n\"config\": {\n \"streamArn\": \"${streamArnVariable}\", // required!\n \"startingPosition\": \"LATEST\", // default is \"TRIM_HORIZON\" if not provided\n \"batchSize\": 50, // default is 100 if not provided\n \"enabled\": false // default is true if not provided \n}", "language": "json", "name": "s-function.json - Kinesis Stream Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"myS3Event\",\n\"type\": \"s3\",\n\"config\": {\n \"bucket\": \"${testEventBucket}\", // required! - bucket name\n \"bucketEvents\": [\"s3:ObjectCreated:*\"], // required! - an array of events that should trigger the lambda\n \"filterRules\" : [\n {\n \"name\" : \"prefix | suffix\",\n \"value\" : \"STRING_VALUE\"\n }\n ] // optional, specify prefix or suffix for S3 event source\n}", "language": "json", "name": "s-function.json - S3 Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"mySNSEvent\",\n\"type\": \"sns\",\n\"config\": {\n \"topicName\": \"test-event-source\" // required! 
- the topic name you want your lambda to subscribe to\n}", "language": "json", "name": "s-function.json - SNS Event Source" } ] } [/block] [block:code] { "codes": [ { "code": "\"name\" : \"mySchedule\",\n\"type\": \"schedule\",\n\"config\": {\n \"schedule\": \"rate(5 minutes)\", // required! - could also take a cron expression: \"cron(0 20 * * ? *)\"\n \"enabled\": true // default is false if not provided\n}", "language": "json", "name": "s-function.json - Schedule Event Source" } ] } [/block] [block:callout] { "type": "info", "title": "Real World Examples", "body": "If you'd like to see a real world working example of `s-function.json` with event sources, [check out our official unit test function](https://github.com/serverless/serverless/blob/master/tests/test-prj/functions/function0/s-function.json)." } [/block]
{"__v":6,"_id":"56dac0493dede50b00eacb76","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"Every Serverless Project uses resources from Amazon Web Services and divides these resources into three groups:\n\n* AWS Lambdas\n* AWS API Gateway REST API\n* AWS Other Resources (IAM Roles, DynamoDB tables, S3 Buckets, etc.)\n\nServerless Projects don't have environments (they live exclusively on AWS).  However, there is still need to separate and isolate the AWS resources a Project uses for development, testing and production purposes and Serverless does this through *Stages*.  Stages are similar to environments, except they exist merely to separate and isolate your Project's AWS resources.\n\nEach Serverless Project can have one or multiple Stages, and each Stage can have one or multiple Regions.  Regions are based off of [AWS Regions](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/), like `us-east-1`.  Some AWS resources  come with their own \"stage\" concepts and Serverless Project Stages are designed to integrate with those, wherever possible.  Here is how:\n\n**AWS Lambdas**\nYour Project will have one set of deployed Lambda Functions on AWS, which can be replicated across each Region your Project uses.  Every Lambda Function can have multiple *versions* and *aliases*.  When you deploy a Function in your Project to a Stage, it deploy a Lambda that will be immediately versioned and aliased under the name of that Stage.\n\n**AWS API Gateway REST API**\nIf your Functions have Endpoint data in their `s-function.json` files, a REST API on AWS API Gateway will automatically be created for your Project.  Projects can only have one REST API, which can be replicated across each Region your Project uses.  Every API Gateway REST API can have multiple stages.  When you deploy an Endpoint in your Project to a Project Stage, it builds the Endpoint on your API Gateway REST API and then creates a deployment in that API Gateway stage.\n\n**AWS Other Resources**\nYour Project's other AWS resources are the only AWS resources that have separate deployments for each Stage.  These separate Stage deployments can be replicated across each Region your Project uses as well.  \n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Creating Your Project Stages\"\n}\n[/block]\n\nWe recommend every Project have the following Stages.\n\n* dev\n* beta\n* prod\n\nIf you are working on a team with multiple developers, we recommend giving each developer working on the Project their own Stage.  In this case, your Project might have Stages like this:\n\n* dev\n* tom\n* jeff\n* beta\n* prod\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Deploying Your Functions\"\n}\n[/block]\nWhen you deploy your Function's Code (aka your AWS Lambda Function), it is given a `LATEST` version by default.  However, pointing your Event Source Maps or your REST API Endpoints to `LATEST` is highly discouraged.  This is because when multiple developers are working on the same Lambda function, they can trample each-other's work, if they have any resources that are pointing toward the `LATEST` version. \n\nTo avoid all of this, Serverless never uses `LATEST`.  Every time you create or update your Lambda Function, it is automatically versioned and aliased to the Project Stage you specify.  This avoids the \"trampling\" issue entirely.  
Further, it allows Serverless to include Environment Variables and Stage-specific IAM Roles within the Lambda Function, which AWS does not yet have support for.\n\n**Here is what happens when you deploy your Function's Code:**\n\n  * Your function code is copied to a distribution directory\n  * Your regular handler file is replaced by one that Serverless adds titled `_serverless_handler`, which contains your Function's Environment Variables in-lined in the code.\n  * Your Lambda code is compressed and uploaded to AWS in the region you chose w/ `publish: true` set.  This publishes a new Lambda Function Version immediately.\n  * An update request is sent to your deployed Lambda Version to Alias it with the specified Stage.\n  * The IAM role specified in the s-project.json for the specified Stage is added to your Lambda on deployment.\n\n**Here is what happens when you deploy your Function's Endpoint:**\n\n  * If no Stage is set, it deploys to your Project's “dev” Stage by default\n  * The Endpoint is constructed on your REST API in the specified Region\n  * A new Deployment is created.\n  * After Deployment is created, the Stage variable is reset to the Stage name.\n\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/WKs3MX5YTiqfcJTE79Nc_serverless_deployment_flow.png\",\n        \"serverless_deployment_flow.png\",\n        \"3000\",\n        \"2289\",\n        \"#c8c8c8\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]","category":"56dac0483dede50b00eacb52","createdAt":"2015-12-02T13:57:47.101Z","excerpt":"The recommended workflow for Serverless Projects.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":4,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"workflow","sync_unique":"","title":"Workflow","type":"basic","updates":["56fc806d896b2c0e00715f47","5703dba9b96a810e009a0b99","574d41510db0870e0075387a","57ad3bdffaa7a10e00449533"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Workflow

The recommended workflow for Serverless Projects.

Every Serverless Project uses resources from Amazon Web Services and divides these resources into three groups:

* AWS Lambdas
* AWS API Gateway REST API
* AWS Other Resources (IAM Roles, DynamoDB tables, S3 Buckets, etc.)

Serverless Projects don't have environments (they live exclusively on AWS). However, there is still a need to separate and isolate the AWS resources a Project uses for development, testing and production purposes, and Serverless does this through *Stages*. Stages are similar to environments, except they exist merely to separate and isolate your Project's AWS resources.

Each Serverless Project can have one or multiple Stages, and each Stage can have one or multiple Regions. Regions are based on [AWS Regions](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/), like `us-east-1`. Some AWS resources come with their own "stage" concepts, and Serverless Project Stages are designed to integrate with those wherever possible. Here is how:

**AWS Lambdas**
Your Project will have one set of deployed Lambda Functions on AWS, which can be replicated across each Region your Project uses. Every Lambda Function can have multiple *versions* and *aliases*. When you deploy a Function in your Project to a Stage, it deploys a Lambda that is immediately versioned and aliased under the name of that Stage.

**AWS API Gateway REST API**
If your Functions have Endpoint data in their `s-function.json` files, a REST API on AWS API Gateway will automatically be created for your Project. Projects can only have one REST API, which can be replicated across each Region your Project uses. Every API Gateway REST API can have multiple stages. When you deploy an Endpoint in your Project to a Project Stage, it builds the Endpoint on your API Gateway REST API and then creates a deployment in that API Gateway stage.

**AWS Other Resources**
Your Project's other AWS resources are the only AWS resources that have separate deployments for each Stage. These separate Stage deployments can be replicated across each Region your Project uses as well.

[block:api-header]
{
  "type": "basic",
  "title": "Creating Your Project Stages"
}
[/block]

We recommend every Project have the following Stages:

* dev
* beta
* prod

If you are working on a team with multiple developers, we recommend giving each developer working on the Project their own Stage. In this case, your Project might have Stages like this:

* dev
* tom
* jeff
* beta
* prod

[block:api-header]
{
  "type": "basic",
  "title": "Deploying Your Functions"
}
[/block]

When you deploy your Function's Code (aka your AWS Lambda Function), it is given a `LATEST` version by default. However, pointing your Event Source Maps or your REST API Endpoints to `LATEST` is highly discouraged. This is because when multiple developers are working on the same Lambda Function, they can trample each other's work if they have any resources pointing toward the `LATEST` version.

To avoid all of this, Serverless never uses `LATEST`. Every time you create or update your Lambda Function, it is automatically versioned and aliased to the Project Stage you specify. This avoids the "trampling" issue entirely. Further, it allows Serverless to include Environment Variables and Stage-specific IAM Roles within the Lambda Function, which AWS does not yet support natively.

**Here is what happens when you deploy your Function's Code:**

* Your Function's code is copied to a distribution directory.
* Your regular handler file is replaced by one that Serverless adds, titled `_serverless_handler`, which contains your Function's Environment Variables in-lined in the code (a rough sketch of this idea appears below).
* Your Lambda code is compressed and uploaded to AWS in the Region you chose, with `publish: true` set. This publishes a new Lambda Function Version immediately.
* An update request is sent to your deployed Lambda Version to alias it with the specified Stage.
* The IAM Role specified in `s-project.json` for the specified Stage is added to your Lambda on deployment.

**Here is what happens when you deploy your Function's Endpoint:**

* If no Stage is set, it deploys to your Project's "dev" Stage by default.
* The Endpoint is constructed on your REST API in the specified Region.
* A new Deployment is created.
* After the Deployment is created, the Stage variable is reset to the Stage name.

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/WKs3MX5YTiqfcJTE79Nc_serverless_deployment_flow.png",
        "serverless_deployment_flow.png",
        "3000",
        "2289",
        "#c8c8c8",
        ""
      ]
    }
  ]
}
[/block]
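To make the environment-variable in-lining above a bit more concrete, here is a minimal, hypothetical sketch of the idea. It is *not* the actual file Serverless generates, and the variable names are placeholders; the point is simply that Stage-specific values end up baked into the deployed bundle as plain code, and the original handler is re-exported unchanged:

```javascript
// _serverless_handler.js — illustrative sketch only, not the real generated file.
'use strict';

// Stage-specific variables (placeholder names) are written into the bundle as
// plain values at deploy time, so they are available via process.env at runtime.
process.env.SERVERLESS_STAGE = 'dev';
process.env.TABLE_NAME = 'my-table-dev';

// The original handler is required and re-exported, so the Lambda entry point
// stays the same while the in-lined variables above take effect.
module.exports.handler = require('./handler').handler;
```

Because the values are baked in per Stage at deploy time, the same code deployed to `dev` and `prod` reads different values from `process.env` without any runtime lookup.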
{"__v":2,"_id":"56dac0493dede50b00eacb79","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"## Don't give `AdministratorAccess` to AWS API keys\nWhile the getting started guide says to create an administrative user with `AdministratorAccess` permissions, it only does this to get you going faster. As stated this should not be done in a production environment. Our recommendation is to give your users with API access key `PowerUserAccess` at a maximum. The fallout is you will not be able to execute a cloudformation json file from the command line that creates any IAM resources. This should be done from the AWS CloudFormation UI, behind a user that has 2FA enabled and a secure password. All the Serverless tooling has a `-c, --noExeCf` option that will simply update your CloudFormation file, which can then be executed in the UI.\n\n## Keep your lambda codebase as small as possible\nThe smaller the size of code, the quicker your container gets up and running. The less code in the execution path, the quicker your runtime VM returns a result. Both of these statements verified by AWS Lambda engineers.\n\n## Reuse your Lambda code\nOrganize your functions in subfolders and have them require a `lib` folder according to your business logic just like any package. Just make sure you set the `handler` property of the `s-function.json` file to be relative to the parent folder that you require so that it is deployed along with your functions.\n\n## Keep your CloudFormation resources organized\nYou can define CF resources in the `s-project.json` or `s-module.json`. If there's a resources that is used only by a single Module, it's best to define it in the `s-module.json` to keep your Project resources organized. And if you decide to share your Module with the community, they'll have that resource available out of the box.\n\n## Initialize external services outside of your Lambda code\nWhen using services (like DynamoDB) make sure to initialize outside of your lambda code. Ex: module initializer (for Node), or to a static constructor (for Java). If you initiate a connection to DDB inside the Lambda function, that code will run on every invoke.","category":"56dac0483dede50b00eacb52","createdAt":"2015-10-16T17:33:26.678Z","excerpt":"Some best practices that we recommend while working with the Serverless Framework.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":5,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"best-practices","sync_unique":"","title":"Best Practices","type":"basic","updates":["569cb068ceb7510d00f2a556","570a99629d7b6e0e003a96f3"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Best Practices

Some best practices that we recommend while working with the Serverless Framework.

## Don't give `AdministratorAccess` to AWS API keys

While the getting started guide says to create an administrative user with `AdministratorAccess` permissions, it only does this to get you going faster. As stated, this should not be done in a production environment. Our recommendation is to give users with API access keys `PowerUserAccess` at most. The trade-off is that you will not be able to execute a CloudFormation template that creates IAM resources from the command line. This should be done from the AWS CloudFormation UI, behind a user that has 2FA enabled and a secure password. All the Serverless tooling has a `-c, --noExeCf` option that will simply update your CloudFormation file, which can then be executed in the UI.

## Keep your Lambda codebase as small as possible

The smaller the size of the code, the quicker your container gets up and running. The less code in the execution path, the quicker your runtime VM returns a result. Both of these statements have been verified by AWS Lambda engineers.

## Reuse your Lambda code

Organize your functions in subfolders and have them require a shared `lib` folder according to your business logic, just like any other package (see the first sketch below). Just make sure you set the `handler` property of the `s-function.json` file to be relative to the parent folder that you require, so that it is deployed along with your functions.

## Keep your CloudFormation resources organized

You can define CF resources in `s-project.json` or `s-module.json`. If there's a resource that is used only by a single Module, it's best to define it in the `s-module.json` to keep your Project resources organized. And if you decide to share your Module with the community, they'll have that resource available out of the box.

## Initialize external services outside of your Lambda code

When using services (like DynamoDB), make sure to initialize them outside of your Lambda handler code, e.g. in the module initializer (for Node) or in a static constructor (for Java). If you initialize a connection to DynamoDB inside the Lambda handler, that code will run on every invocation (see the second sketch below).
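To illustrate the "Reuse your Lambda code" point, here is a minimal sketch; the folder layout, file names and `users.create` helper are all hypothetical:

```javascript
// functions/users/create/handler.js (hypothetical path)
'use strict';

// Shared business logic lives in a lib folder higher up in the project.
// Setting the handler path in s-function.json relative to that parent folder
// ensures the lib folder is packaged and deployed along with this function.
const users = require('../../../lib/users');

module.exports.handler = function(event, context, callback) {
  users.create(event, callback);
};
```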
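And here is a minimal Node.js sketch of the "initialize outside of your Lambda code" practice; the table name and event shape are placeholders:

```javascript
'use strict';

const AWS = require('aws-sdk');

// Created once per container, at module load time, so warm invocations reuse
// the client instead of re-initializing it on every call.
const dynamo = new AWS.DynamoDB.DocumentClient();

module.exports.handler = function(event, context, callback) {
  // Only per-request work belongs inside the handler.
  dynamo.get({ TableName: 'my-table', Key: { id: event.id } }, function(err, data) {
    if (err) return callback(err);
    return callback(null, data.Item);
  });
};
```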
{"__v":8,"_id":"56dac0493dede50b00eacb59","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"The Serverless Framework is mostly a command line utility that makes it easy to manage your serverless projects. We've designed the CLI to be as user-friendly as possible. A typical Serverless command consists of a **context** and an **action**, along with some **options** and **parameters** where applicable. For example, for the following command:\n\n```\nserverless project create -n project-name -r us-east-1\n```\n\nThis command has a \"project\" **context**, and a \"create\" **action**. `-n project-name` and `-r us-east-1` are both options that the `serverless project create` command needs. It still needs more options, but the CLI is smart enough to know what's missing, and it'll prompt you for only what's missing. \n[block:callout]\n{\n  \"type\": \"info\",\n  \"body\": \"A parameter doesn't require the dash that an option requires (ex. `-s <some-option>`), which makes it a little easier to type. But that also means that the ordering of the parameters matter if a command needs more than one parameter.\\n\\nWe try to be consistent in our use of parameters and options. If there's a lot of data required from the user, we prefer setting them as options so that you wouldn't have to think about the ordering, but for simple commands, using parameters is better.\",\n  \"title\": \"The difference between a Parameter and an Option\"\n}\n[/block]\nHere's a summary of all the available Serverless commands:\n\n```\nserverless project create\nserverless project install\nserverless project init\nserverless project remove\n\nserverless function run\nserverless function create\nserverless function deploy\nserverless function logs\nserverless function remove\nserverless function rollback\n\nserverless endpoint deploy\nserverless endpoint remove\n\nserverless event deploy\nserverless event remove\n\nserverless dash deploy\nserverless dash summary\n\nserverless stage create\nserverless stage remove\n\nserverless region create\nserverless region remove\n\nserverless resources deploy\nserverless resources remove\nserverless resources diff\n\nserverless plugin create\n```\n[block:callout]\n{\n  \"type\": \"success\",\n  \"title\": \"A Nifty Shortcut\",\n  \"body\": \"It's a bit tedious to type the `serverless` command every time you want to do something. You can use the `sls` or the `slss` shortcut instead. For example, you can create a new project by typing: `sls project create`. This applies to **all** other commands as well. Pretty neat, ha!\"\n}\n[/block]\nYou can find detailed information about each Serverless command, its required options and parameters, along with some examples below. Keep reading!\n\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Getting Help & Debugging\",\n  \"body\": \"Whenever you feel confused, just run `serverless` in the terminal. This will generate helpful information about all the available commands. You can also add the `-h` or `--help` option to any command for information about that specific command.\\n\\nIf you're facing some bugs, just add the `--debug` option to any command, which will run the command in **Debug Mode** that generates helpful information about what's going on with the code. 
\\n\\nYou can also ask for help from our community by [creating a new issue on our repo](https://github.com/serverless/serverless/issues/new), or asking in our [Gitter chat room](https://gitter.im/serverless/serverless) (ping [@ac360](https://github.com/ac360) or [@eahefnawy](https://github.com/eahefnawy) for support). Just make sure you provide your installed version of the Serverless Framework by running `serverless version`.\"\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Turn Off CLI Interactivity\",\n  \"body\": \"We've designed our CLI with best user experience in mind. So most commands are in interactive mode where you'll be prompted for any missing options. However, when you're using our CLI programmatically (ie. By using Continuous Integration tools), you'll need to turn off the interactive mode. You can do that by setting an environment variable called `CI` to true.\"\n}\n[/block]","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-07T12:03:00.487Z","excerpt":"An overview of how the Serverless CLI works.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":0,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"commands-overview","sync_unique":"","title":"CLI Overview","type":"basic","updates":["5683968cf46b2b0d0032b13a","568b3857d4e2360d0098013d","56f2a3342344ff0e006c014d","572e553a8285700e00c93ad9"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

CLI Overview

An overview of how the Serverless CLI works.

The Serverless Framework is mostly a command line utility that makes it easy to manage your serverless projects. We've designed the CLI to be as user-friendly as possible. A typical Serverless command consists of a **context** and an **action**, along with some **options** and **parameters** where applicable. For example, take the following command:

```
serverless project create -n project-name -r us-east-1
```

This command has a "project" **context** and a "create" **action**. `-n project-name` and `-r us-east-1` are both options that the `serverless project create` command needs. It still needs more options, but the CLI is smart enough to know what's missing, and it'll prompt you only for what's missing.

[block:callout]
{
  "type": "info",
  "title": "The difference between a Parameter and an Option",
  "body": "A parameter doesn't require the dash that an option requires (ex. `-s <some-option>`), which makes it a little easier to type. But that also means that the ordering of the parameters matters if a command needs more than one parameter.\n\nWe try to be consistent in our use of parameters and options. If there's a lot of data required from the user, we prefer setting them as options so that you don't have to think about the ordering, but for simple commands, using parameters is better."
}
[/block]

Here's a summary of all the available Serverless commands:

```
serverless project create
serverless project install
serverless project init
serverless project remove

serverless function run
serverless function create
serverless function deploy
serverless function logs
serverless function remove
serverless function rollback

serverless endpoint deploy
serverless endpoint remove

serverless event deploy
serverless event remove

serverless dash deploy
serverless dash summary

serverless stage create
serverless stage remove

serverless region create
serverless region remove

serverless resources deploy
serverless resources remove
serverless resources diff

serverless plugin create
```

[block:callout]
{
  "type": "success",
  "title": "A Nifty Shortcut",
  "body": "It's a bit tedious to type the `serverless` command every time you want to do something. You can use the `sls` or the `slss` shortcut instead. For example, you can create a new project by typing `sls project create`. This applies to **all** other commands as well. Pretty neat, huh?"
}
[/block]

You can find detailed information about each Serverless command, its required options and parameters, along with some examples, below. Keep reading!

[block:callout]
{
  "type": "info",
  "title": "Getting Help & Debugging",
  "body": "Whenever you feel confused, just run `serverless` in the terminal. This will generate helpful information about all the available commands. You can also add the `-h` or `--help` option to any command for information about that specific command.\n\nIf you're facing bugs, just add the `--debug` option to any command, which will run the command in **Debug Mode** and generate helpful information about what's going on with the code.\n\nYou can also ask for help from our community by [creating a new issue on our repo](https://github.com/serverless/serverless/issues/new), or asking in our [Gitter chat room](https://gitter.im/serverless/serverless) (ping [@ac360](https://github.com/ac360) or [@eahefnawy](https://github.com/eahefnawy) for support). Just make sure you provide your installed version of the Serverless Framework by running `serverless version`."
}
[/block]

[block:callout]
{
  "type": "info",
  "title": "Turn Off CLI Interactivity",
  "body": "We've designed our CLI with the best user experience in mind, so most commands are interactive and will prompt you for any missing options. However, when you're using the CLI programmatically (e.g. with Continuous Integration tools), you'll need to turn off interactive mode. You can do that by setting an environment variable called `CI` to true."
}
[/block]
{"__v":3,"_id":"56dac0493dede50b00eacb5a","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless project create\n```\nCreates a new Serverless project in the current working directory with a default `dev` stage. It takes the following options:\n\n* `-n <name>` the name of your project.\n* `-b <bucket>` the domain of your project.\n* `-p <awsProfile>` an AWS profile that is defined in `~/.aws/credentials` file.\n* `-r <region>` a lambda supported region for your new project.\n* `-s <stage>` the first stage for your new project.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Manual steps required if -c specified\",\n  \"body\": \"If you choose to not execute the CloudFormation file by specifying the `-c` flag, make sure to either manually run `sls resources deploy` or add the `iamRoleArnLambda` attribute to the `_meta/variables/s-variables-<stage>-<region>.json` file.\"\n}\n[/block]\n### Examples\n```\nserverless project create\n```\nIn this example, all options are missing, so you'll be prompted to enter each of the required options for best user experience.\n\n```\nserverless project create -n myProject -b com.my-project\n```\nIn this example, you provided the name and the bucket of the project, so you'll only be prompted for the stage, region and profile options.\n\n```\nserverless project create -c true\n```\nIn this example, you've instructed Serverless to **not** execute CloudFormation. So after you enter the project information, the CloudFormation stack won't be created, and you'll have to manually upload the relevant CF template file from the `_meta/resources` folder to AWS.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:17:05.360Z","excerpt":"Creates a new Serverless project.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":1,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project-create","sync_unique":"","title":"Project Create","type":"basic","updates":["56fd9775aa7b710e007d37e8","571fafe2a0acd42000af9566"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Create

Creates a new Serverless project.

```
serverless project create
```

Creates a new Serverless project in the current working directory with a default `dev` stage. It takes the following options:

* `-n <name>` the name of your project.
* `-b <bucket>` the bucket name of your project (a domain-style name, e.g. `com.my-project`).
* `-p <awsProfile>` an AWS profile that is defined in the `~/.aws/credentials` file.
* `-r <region>` a Lambda-supported region for your new project.
* `-s <stage>` the first stage for your new project.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

[block:callout]
{
  "type": "warning",
  "title": "Manual steps required if -c specified",
  "body": "If you choose to not execute the CloudFormation file by specifying the `-c` flag, make sure to either manually run `sls resources deploy` or add the `iamRoleArnLambda` attribute to the `_meta/variables/s-variables-<stage>-<region>.json` file."
}
[/block]

### Examples

```
serverless project create
```

In this example, all options are missing, so you'll be prompted to enter each of the required options for the best user experience.

```
serverless project create -n myProject -b com.my-project
```

In this example, you provided the name and the bucket of the project, so you'll only be prompted for the stage, region and profile options.

```
serverless project create -c true
```

In this example, you've instructed Serverless to **not** execute CloudFormation. So after you enter the project information, the CloudFormation stack won't be created, and you'll have to manually upload the relevant CF template file from the `_meta/resources` folder to AWS.
{"__v":2,"_id":"56dac0493dede50b00eacb5b","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless project install\n```\nInstalls a new Serverless project in the current working directory with a default `dev` stage. It takes the following parameters and options:\n\n* `<npm-module-name>` (parameter): the name of the Serverless project you want to install that is published on npm.\n* `-n <name>` the name of your newly installed project. This will replace the npm name you provided earlier.\n* `-b <bucket>` the bucket name of your installed project.\n* `-p <awsProfile>` an AWS profile that is defined in `~/.aws/credentials` file.\n* `-s <stage>` the first stage for your installed project.\n* `-r <region>` a lambda supported region for your installed project.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless project install serverless-starter\n```\nIn this example, you've passed the required parameter (the serverless project you want to install), but all options are missing, so you'll be prompted to enter each of the required options for best user experience. Just like with `serverless project create`.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:16:13.494Z","excerpt":"Installs a published Serverless project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":2,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project-install","sync_unique":"","title":"Project Install","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Install

Installs a published Serverless project

```
serverless project install
```

Installs a new Serverless project in the current working directory with a default `dev` stage. It takes the following parameters and options:

* `<npm-module-name>` (parameter): the name of the Serverless project you want to install, as published on npm.
* `-n <name>` the name of your newly installed project. This will replace the npm name you provided earlier.
* `-b <bucket>` the bucket name of your installed project.
* `-p <awsProfile>` an AWS profile that is defined in the `~/.aws/credentials` file.
* `-s <stage>` the first stage for your installed project.
* `-r <region>` a Lambda-supported region for your installed project.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples

```
serverless project install serverless-starter
```

In this example, you've passed the required parameter (the Serverless project you want to install), but all options are missing, so you'll be prompted to enter each of the required options for the best user experience, just like with `serverless project create`.
{"__v":1,"_id":"56dac0493dede50b00eacb5c","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless project init\n```\n**Must be run within a Serverless Project**. Initializes a new Serverless project. It's useful when you `git clone` a Serverless project, and want to reconstruct the `_meta` folder and set up the project on AWS. It takes the following options:\n\n* `-n <name>` the name of your project.\n* `-b <bucket>` the bucket of your project.\n* `-p <awsProfile>` an AWS profile that is defined in `~/.aws/credentials` file.\n* `-s <stage>` the first stage for your new project. \n* `-r <region>` a lambda supported region for your project.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless project init\n```\nAfter you `git clone` a Serverless Project and `cd` inside the root directory of the project. You can run `serverless project init` to reconstruct the `_meta` folder and set up your project on your own AWS account. It's very similar to `sls project create`, except that you're using a shared project with all the code written for you.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:22:45.497Z","excerpt":"Initializes a new Serverless project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":3,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project-init","sync_unique":"","title":"Project Init","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Init

Initializes a new Serverless project

```
serverless project init
```

**Must be run within a Serverless Project.** Initializes a Serverless project. It's useful when you `git clone` a Serverless project and want to reconstruct the `_meta` folder and set up the project on AWS. It takes the following options:

* `-n <name>` the name of your project.
* `-b <bucket>` the bucket of your project.
* `-p <awsProfile>` an AWS profile that is defined in the `~/.aws/credentials` file.
* `-s <stage>` the first stage for your project.
* `-r <region>` a Lambda-supported region for your project.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples

```
serverless project init
```

After you `git clone` a Serverless Project and `cd` into the root directory of the project, you can run `serverless project init` to reconstruct the `_meta` folder and set up the project on your own AWS account. It's very similar to `sls project create`, except that you're using a shared project with all the code already written for you.
{"__v":1,"_id":"56dac0493dede50b00eacb5d","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless project remove\n```\n**Must be run within a Serverless Project**. Removes and cleans up your Serverless Project from AWS. It takes the following options:\n\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless project remove\n```\nIn this example, the command will remove all stages, regions, CF resources, functions, endpoints and events from AWS. This command is useful when you want to clean up your deprecated projects form AWS.\n\n```\nserverless project remove -c\n```\nIn this example, you've set the `-c` to true, so the command will remove your whole project from your account, but **it won't remove your CF resources**. It'll output a CF template in the `_meta/resources` folder that you can upload to the AWS console to manually remove your CF resources.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:34:45.982Z","excerpt":"Removes and cleans up your Serverless Project from AWS.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":4,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project-remove","sync_unique":"","title":"Project Remove","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Project Remove

Removes and cleans up your Serverless Project from AWS.

```
serverless project remove
```

**Must be run within a Serverless Project.** Removes and cleans up your Serverless Project from AWS. It takes the following options:

* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples

```
serverless project remove
```

In this example, the command will remove all stages, regions, CF resources, functions, endpoints and events from AWS. This command is useful when you want to clean up your deprecated projects from AWS.

```
serverless project remove -c
```

In this example, you've set the `-c` option to true, so the command will remove your whole project from your account, but **it won't remove your CF resources**. It'll output a CF template in the `_meta/resources` folder that you can upload to the AWS console to manually remove your CF resources.
{"__v":1,"_id":"56dac0493dede50b00eacb5e","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless stage create\n```\n**Must be run inside a Serverless project.** It creates a new stage and its first region for your project. It takes the following options:\n\n* `-s <stage>` The name of your new stage.\n* `-r <region>` A lambda supported region for your new stage.\n* `-p <awsProfile>` An AWS profile to use for this stage.\n\n### Examples\n```\nserverless stage create\n```\nIn this example, all options are missing, so you'll be prompted to enter each of the required options for best user experience.\n\n```\nserverless stage create -s production -r us-east-1 -p default\n```\nIn this example, you provided all the required options. So your stage will be created right away without prompting for anything else.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:17:17.185Z","excerpt":"creates a new stage for your serverless project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":5,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"stage-create","sync_unique":"","title":"Stage Create","type":"basic","updates":["5688f4c5e1f9a00d00350af1"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Stage Create

Creates a new stage for your Serverless project

```
serverless stage create
```

**Must be run inside a Serverless project.** Creates a new stage and its first region for your project. It takes the following options:

* `-s <stage>` The name of your new stage.
* `-r <region>` A Lambda-supported region for your new stage.
* `-p <awsProfile>` An AWS profile to use for this stage.

### Examples

```
serverless stage create
```

In this example, all options are missing, so you'll be prompted to enter each of the required options for the best user experience.

```
serverless stage create -s production -r us-east-1 -p default
```

In this example, you provided all the required options, so your stage will be created right away without prompting for anything else.
{"__v":1,"_id":"56dac0493dede50b00eacb5f","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless stage remove\n```\n**Must be run within a Serverless Project**. Removes an existing stage from your Serverless Project and AWS account, along with the CF resources defined in that stage. It takes the following options:\n\n* `-s <stage>` the stage you want to remove from your Serverless project.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless stage remove -s production\n```\nIn this example, the command will instantly remove the `prod` stage from your Serverless Project and AWS account along with any CF resources defined in that stage.\n\n```\nserverless stage remove -s prod -c\n```\nIn this example, you've set the `-c` option to `true`, so the command will remove the `prod` stage from your Serverless Project and AWS account, but **it won't remove the CF resources for that stage**. Instead, it'll output a CF template in the `_meta/resources` folder that you can execute manually on the AWS console.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:44:42.754Z","excerpt":"Removes a stage from your Serverless Project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":6,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"stage-remove","sync_unique":"","title":"Stage Remove","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Stage Remove

Removes a stage from your Serverless Project

```
serverless stage remove
```

**Must be run within a Serverless Project.** Removes an existing stage from your Serverless Project and AWS account, along with the CF resources defined in that stage. It takes the following options:

* `-s <stage>` the stage you want to remove from your Serverless project.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples

```
serverless stage remove -s production
```

In this example, the command will instantly remove the `production` stage from your Serverless Project and AWS account, along with any CF resources defined in that stage.

```
serverless stage remove -s prod -c
```

In this example, you've set the `-c` option to `true`, so the command will remove the `prod` stage from your Serverless Project and AWS account, but **it won't remove the CF resources for that stage**. Instead, it'll output a CF template in the `_meta/resources` folder that you can execute manually on the AWS console.
{"__v":3,"_id":"56dac0493dede50b00eacb60","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless region create\n```\n**Must be run inside a Serverless project.** It creates a new region for an existing stage inside your project. It takes the following options:\n\n* `-s <stage>` The name of the stage you want to add a region to.\n* `-r <region>` A lambda supported region for your chosen stage.\n\n### Examples\n```\nserverless region create\n```\nIn this example, all options are missing, so you'll be prompted to enter each of the required options for best user experience.\n\n```\nserverless region create -s prod -r us-east-1\n```\nIn this example, you're creating a new `us-east-1` region in the `prod` stage. If the `production` stage doesn't exist in your project, or already has `us-east-1` region defined, you'll get an error.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:17:40.588Z","excerpt":"Creates a new region for an existing stage.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":7,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"region-create","sync_unique":"","title":"Region Create","type":"basic","updates":["5688f4f96ac8f90d0043c4f5","572e9ceca642de0e00a7252b"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Region Create

Creates a new region for an existing stage.

```
serverless region create
```

**Must be run inside a Serverless project.** Creates a new region for an existing stage inside your project. It takes the following options:

* `-s <stage>` The name of the stage you want to add a region to.
* `-r <region>` A Lambda-supported region for your chosen stage.

### Examples

```
serverless region create
```

In this example, all options are missing, so you'll be prompted to enter each of the required options for the best user experience.

```
serverless region create -s prod -r us-east-1
```

In this example, you're creating a new `us-east-1` region in the `prod` stage. If the `prod` stage doesn't exist in your project, or already has the `us-east-1` region defined, you'll get an error.
``` serverless region create ``` **Must be run inside a Serverless project.** It creates a new region for an existing stage inside your project. It takes the following options: * `-s <stage>` The name of the stage you want to add a region to. * `-r <region>` A lambda supported region for your chosen stage. ### Examples ``` serverless region create ``` In this example, all options are missing, so you'll be prompted to enter each of the required options for best user experience. ``` serverless region create -s prod -r us-east-1 ``` In this example, you're creating a new `us-east-1` region in the `prod` stage. If the `production` stage doesn't exist in your project, or already has `us-east-1` region defined, you'll get an error.
{"__v":1,"_id":"56dac0493dede50b00eacb61","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless region remove\n```\n**Must be run within a Serverless Project**. Removes an existing region from an existing stage in your Serverless Project, along with the CF resources defined in that region. It takes the following options:\n\n* `-s <stage>` the stage that contains the region you want to remove from your Serverless Project.\n* `-r <region>` the region you want to remove from your Serverless project.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless region remove -s prod -r us-west-2\n```\nIn this example, the command will instantly remove the `us-west-2` region from the `prod` stage in your Serverless Project. It will also clean up your AWS account from that region along with any CF resources defined in that region.\n\n```\nserverless region remove -s prod -r us-west-2 -c\n```\nIn this example, you've set the `-c` option to `true`, so the command will remove the `us-west-2` region from the `prod` stage, but **it won't remove the CF resources for that region**. Instead, it'll output a CF template in the `_meta/resources` folder that you can execute manually on the AWS console.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:50:17.284Z","excerpt":"Removes a region from a stage in your Serverless Project.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":8,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"region-remove","sync_unique":"","title":"Region Remove","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Region Remove

Removes a region from a stage in your Serverless Project.

```
serverless region remove
```
**Must be run within a Serverless Project.** Removes an existing region from an existing stage in your Serverless Project, along with the CF resources defined in that region. It takes the following options:

* `-s <stage>` The stage that contains the region you want to remove from your Serverless Project.
* `-r <region>` The region you want to remove from your Serverless Project.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples
```
serverless region remove -s prod -r us-west-2
```
In this example, the command will instantly remove the `us-west-2` region from the `prod` stage in your Serverless Project. It will also clean up that region in your AWS account, along with any CF resources defined in that region.

```
serverless region remove -s prod -r us-west-2 -c
```
In this example, you've set the `-c` option to `true`, so the command will remove the `us-west-2` region from the `prod` stage, but **it won't remove the CF resources for that region**. Instead, it'll output a CF template in the `_meta/resources` folder that you can execute manually from the AWS console.
{"__v":3,"_id":"56dac0493dede50b00eacb63","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless function create\n```\n**Must be run inside a Serverless Project.** It generates a basic scaffolding for a new Function. Where this function is located inside your project depends on the path you provide. This command takes the following parameters and options:\n\n* `functionPath` The relative path to the function you want to create (relative to the project root). The command will create any folders required no matter how deep the path is to make that path valid.\n* `-r <runtime>` Optional  - The runtime of your new function. Default is `nodejs`. The only other supported runtime is `python2.7`\n\n### Examples\n```\nserverless function create function1\n```\nIn this example, you'll create a function named `function1` in the root of the project.\n```\nserverless function create functions/function1\n```\nIn this example, you'll create a function named `function1` inside a `functions` folder in the root of the project.\n\n```\nserverless function create functions/subfolder/function1\n```\nIn this example, you'll create a function named `function1` inside the `subfolder` subfolder inside a `functions` folder in the root of the project. The command simply creates any subfolders you need to make your function path valid.\n\n```\nserverless function create\n```\nIn this example, you did not provide a function path, so you'll be prompted to enter a function name. For best user experience, the exact location of your new function within your project will be based on the current working directory. If you run this command in the project root, you'll create the function in the root of the project, if you run it inside any folder, you'll create it inside that folder.\n\n```\nserverless function create -r python2.7\n```\nThis example is identical to the previous example, but because you provided a runtime option with the value of `python2.7`, the created function will be a python function, and you'll have a `handler.py` instead of `handler.js` in the newly created function folder.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:18:21.457Z","excerpt":"Creates a New Function for Your Project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":9,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-create","sync_unique":"","title":"Function Create","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Create

Creates a New Function for Your Project

```
serverless function create
```
**Must be run inside a Serverless Project.** It generates basic scaffolding for a new Function. Where this function is located inside your project depends on the path you provide. This command takes the following parameters and options:

* `functionPath` The relative path to the function you want to create (relative to the project root). The command will create any folders required, no matter how deep the path is, to make that path valid.
* `-r <runtime>` **Optional** - The runtime of your new function. Default is `nodejs`. The only other supported runtime is `python2.7`.

### Examples
```
serverless function create function1
```
In this example, you'll create a function named `function1` in the root of the project.

```
serverless function create functions/function1
```
In this example, you'll create a function named `function1` inside a `functions` folder in the root of the project.

```
serverless function create functions/subfolder/function1
```
In this example, you'll create a function named `function1` inside the `subfolder` subfolder of a `functions` folder in the root of the project. The command simply creates any subfolders needed to make your function path valid.

```
serverless function create
```
In this example, you did not provide a function path, so you'll be prompted to enter a function name. For the best user experience, the exact location of your new function within your project is based on the current working directory. If you run this command in the project root, you'll create the function in the root of the project; if you run it inside any folder, you'll create it inside that folder.

```
serverless function create -r python2.7
```
This example is identical to the previous one, but because you provided a runtime option with the value `python2.7`, the created function will be a Python function, and you'll have a `handler.py` instead of a `handler.js` in the newly created function folder.
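For orientation, here's a rough sketch of what a newly scaffolded Node.js function folder tends to look like. Treat it as illustrative only; the exact files and their contents depend on your framework version (see the Function Configuration section for the authoritative layout):

```
functions/function1/
├── s-function.json   # the function's configuration (name, handler, endpoints, events, ...)
├── handler.js        # the Lambda handler code (handler.py when the runtime is python2.7)
└── event.json        # a sample event consumed by `serverless function run`
```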
{"__v":6,"_id":"56dac0493dede50b00eacb66","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless function deploy\n```\n**Must be run inside a Serverless Project.** It deploys your function to AWS. This command takes the following options and parameters:\n\n* functionNames (parameter): The names of the functions you want to deploy. Can be one or many function names.\n* `-s <stage>` The stage you want to deploy your functions to. Optional if your project has only one stage.\n* `-r <region>` Optional - The AWS region you want to deploy your function to. If not provided, the function will be deployed to **all** the regions defined in your chosen stage by default.\n* `-f <alias>` Optional - Sets an alias for your function.\n* `-a <all>` Optional - Deploy all your functions.\n* `-t <dontRemoveTemp>` Optional - Do not remove `_tmp` folder.\n\n### Examples\n```\nserverless function deploy\n```\nIf you don't provide a function name like in this example, the command will behave depending on where in your Project you're running it. If you're running in your project root directory, it'll deploy all the functions in your project, if you're running in a subfolder, it'll only deploy all the functions inside that subfolder, if you're running in a Function, it'll deploy only this Function.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Beware of the `handler` Property\",\n  \"body\": \"We mentioned it in the [Function Configuration Section](/v0.5.0/docs/function-configuration) already, but to emphasize we'll mention it again. The `handler` property of your function will determine the root of the deployed lambda package. So if your function require code from any parent folder, make sure you set the handler property path to be **relative to that parent folder**. By default, it's relative to the function folder only, so we're assuming you have a simple function that is not requiring any code from any parent folders.\"\n}\n[/block]\n```\nserverless function deploy myFunction -s prod -r us-east-1\n```\nIn this example, you'll instantly deploy the `myFunction` function. The function will be deployed to the `us-east-1` region in the `prod` stage.\n\n```\nserverless function deploy myFunction -f myAlias\n```\nIn this example, you'll be prompted to choose a stage if your project has more than one stage. After that, the command will deploy the `myFunction` function all regions defined in the chosen stage while setting an alias `myAlias` to your function.\n\n```\nserverless function deploy myFunction myOtherFunction\n```\nIn this example, you'll deploy the **two** functions `myFunction` and `myOtherFunction` because you provided two Funciton names. But first you'll be prompted to choose a stage if your Project has more than one stage, after that deployment to all regions in that stage will begin.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Deploy Resources Before Functions\",\n  \"body\": \"Function deployment requires the `iamRoleArnLambda` variable, which is generated only after you deploy your resources. 
By default, resources are deployed automatically whenever you create a new stage/region, but if you explicitly chose not to deploy your resources when you created your stage/region with the `-c` option, remember to deploy the resources manually with `sls resources deploy` to the same stage/region you want to deploy your functions to before you deploy your functions.\"\n}\n[/block]","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:18:59.133Z","excerpt":"Deploys your function to AWS.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":10,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-deploy","sync_unique":"","title":"Function Deploy","type":"basic","updates":["569e3f620306a10d00ce9aca","5715249bff0cce190056ee4f","5720e852db52d01700f5d26b"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Deploy

Deploys your function to AWS.

```
serverless function deploy
```
**Must be run inside a Serverless Project.** It deploys your functions to AWS. This command takes the following options and parameters:

* `functionNames` (parameter): The names of the functions you want to deploy. Can be one or many function names.
* `-s <stage>` The stage you want to deploy your functions to. Optional if your project has only one stage.
* `-r <region>` **Optional** - The AWS region you want to deploy your functions to. If not provided, the functions will be deployed to **all** the regions defined in your chosen stage by default.
* `-f <alias>` **Optional** - Sets an alias for your function.
* `-a <all>` **Optional** - Deploys all your functions.
* `-t <dontRemoveTemp>` **Optional** - Do not remove the `_tmp` folder.

### Examples
```
serverless function deploy
```
If you don't provide a function name, as in this example, the command behaves depending on where in your project you run it. If you run it in your project root directory, it'll deploy all the functions in your project; if you run it in a subfolder, it'll deploy only the functions inside that subfolder; if you run it inside a function, it'll deploy only that function.

> **Warning: Beware of the `handler` Property**
> We mentioned it in the [Function Configuration Section](/v0.5.0/docs/function-configuration) already, but to emphasize, we'll mention it again. The `handler` property of your function determines the root of the deployed Lambda package. So if your function requires code from any parent folder, make sure you set the handler property path to be **relative to that parent folder**. By default, it's relative to the function folder only, so we're assuming you have a simple function that doesn't require code from any parent folders.

```
serverless function deploy myFunction -s prod -r us-east-1
```
In this example, you'll instantly deploy the `myFunction` function. The function will be deployed to the `us-east-1` region in the `prod` stage.

```
serverless function deploy myFunction -f myAlias
```
In this example, you'll be prompted to choose a stage if your project has more than one stage. After that, the command will deploy the `myFunction` function to all regions defined in the chosen stage while setting the alias `myAlias` on your function.

```
serverless function deploy myFunction myOtherFunction
```
In this example, you'll deploy the **two** functions `myFunction` and `myOtherFunction` because you provided two function names. But first you'll be prompted to choose a stage if your project has more than one stage; after that, deployment to all regions in that stage will begin.

> **Warning: Deploy Resources Before Functions**
> Function deployment requires the `iamRoleArnLambda` variable, which is generated only after you deploy your resources. By default, resources are deployed automatically whenever you create a new stage/region, but if you explicitly chose not to deploy your resources when you created your stage/region with the `-c` option, remember to deploy the resources manually with `sls resources deploy` to the same stage/region you want to deploy your functions to, before you deploy your functions.
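To make the handler warning above concrete, here's a hedged sketch of the relevant part of a function's `s-function.json`, for a hypothetical function that requires shared code from a project-root `lib/` folder. The folder names are illustrative and the full configuration schema has more fields than shown here; check the Function Configuration section for your framework version:

```
{
  "name": "myFunction",
  "runtime": "nodejs",
  "handler": "functions/myFunction/handler.handler"
}
```

With the handler path written relative to the project root like this, the deployed package's root becomes the project root, so a call such as `require('../../lib/shared')` inside `handler.js` still resolves after deployment.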
{"__v":2,"_id":"56ddafaf9888d23200c44192","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless function remove\n```\n**Must be run inside a Serverless Project.** It removes deployed functions from your AWS account based on the provided stage/region. It takes the following options and parameters:\n\n* functionNames (parameter): The names of the functions you want to remove. Can be one or many function names.\n* `-s <stage>` the stage you want to remove functions from.\n* `-r <region>` the region in your chosen stage you want to remove functions from.\n* `-a <BOOLEAN>` **Optional** - removes all functions from your AWS account. Default is false (obviously! :)\n\n### Examples\n```\nserverless function remove myFunction\n```\nIn this example, you'll be prompted to choose a stage and region to remove your functions from. After that the `myFunction` function will be removed from your AWS account.\n\n```\nserverless function remove myFunction myOtherFunction -s prod -r us-east-1\n```\nIn this example, you'll instantly remove the functions `myFunction` and `myOtherFunction` from the `us-east-1` region in the `prod` stage.\n\n```\nserverless function remove --all -s prod -r us-east-1\n```\nIn this example, you'll instantly remove all the functions in your project from the `us-east-1` region in the `prod` stage. Super dangerous!","category":"56dac0483dede50b00eacb53","createdAt":"2016-03-07T16:43:27.897Z","excerpt":"Removes deployed functions from your AWS account based on the provided stage/region.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":11,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-remove","sync_unique":"","title":"Function Remove","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Remove

Removes deployed functions from your AWS account based on the provided stage/region.

```
serverless function remove
```
**Must be run inside a Serverless Project.** It removes deployed functions from your AWS account based on the provided stage/region. It takes the following options and parameters:

* `functionNames` (parameter): The names of the functions you want to remove. Can be one or many function names.
* `-s <stage>` The stage you want to remove functions from.
* `-r <region>` The region in your chosen stage you want to remove functions from.
* `-a <BOOLEAN>` **Optional** - Removes all functions from your AWS account. Default is **false** (obviously!).

### Examples
```
serverless function remove myFunction
```
In this example, you'll be prompted to choose a stage and region to remove your functions from. After that, the `myFunction` function will be removed from your AWS account.

```
serverless function remove myFunction myOtherFunction -s prod -r us-east-1
```
In this example, you'll instantly remove the functions `myFunction` and `myOtherFunction` from the `us-east-1` region in the `prod` stage.

```
serverless function remove --all -s prod -r us-east-1
```
In this example, you'll instantly remove all the functions in your project from the `us-east-1` region in the `prod` stage. Super dangerous!
{"__v":0,"_id":"56ddc3c790559a2900a3a4a5","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"```\nserverless function rollback\n```\n**Must be run inside a Serverless Project.** This command lets you roll back a deployed function to a previous version. It takes the following options and parameters:\n\n* functionName (parameter): The name of the function you want to rollback.\n* `-s <stage>` The stage you want to rollback your function in.\n* `-r <region>` The AWS region you want to rollback your function in. Optional if your stage has only one region.\n* `-v <NUMBER>` The version you want to rollback your function to.\n* `-m <NUMBER>` Optional - Maximum number of versions to show in prompt. Default is 50.\n\n### Examples\n```\nserverless function rollback myFunction\n```\nIn this example, you're trying to rollback the `myFunction` function, but you didn't provide a region, stage or a version number, so you'll be prompted for all these required options. After you make your choices your deployed function will roll back to the chosen version.\n\n```\nserverless function rollback myFunction -s prod -r us-east-1 -v 4\n```\nIn this example, you'll instantly rollback the `myFunction` function to version 4, in the `us-east-1` region in the `prod` stage.","category":"56dac0483dede50b00eacb53","createdAt":"2016-03-07T18:09:11.442Z","excerpt":"Rollback a deployed function to a previous version.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":12,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-rollback","sync_unique":"","title":"Function Rollback","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Rollback

Rollback a deployed function to a previous version.

```
serverless function rollback
```
**Must be run inside a Serverless Project.** This command lets you roll back a deployed function to a previous version. It takes the following options and parameters:

* `functionName` (parameter): The name of the function you want to roll back.
* `-s <stage>` The stage you want to roll back your function in.
* `-r <region>` The AWS region you want to roll back your function in. Optional if your stage has only one region.
* `-v <NUMBER>` The version you want to roll back your function to.
* `-m <NUMBER>` **Optional** - Maximum number of versions to show in the prompt. Default is 50.

### Examples
```
serverless function rollback myFunction
```
In this example, you're trying to roll back the `myFunction` function, but you didn't provide a region, stage or version number, so you'll be prompted for all of these required options. After you make your choices, your deployed function will roll back to the chosen version.

```
serverless function rollback myFunction -s prod -r us-east-1 -v 4
```
In this example, you'll instantly roll back the `myFunction` function to version 4, in the `us-east-1` region in the `prod` stage.
{"__v":5,"_id":"56dac0493dede50b00eacb64","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless function run\n```\n**Must be run inside a Serverless Project.** It runs your local or deployed function for testing using the `event.json` as a sample event. This command takes the following **parameters**:\n\n* functionName (parameter): The unique name of the function you want to run.\n* `-s <stage>` The stage you want to run your function in.\n* `-r <region>`The region you want to run your function in. Optional if your Stage has only one region.\n* `-l <BOOLEAN>` Show the log output. Optional.\n* `-i, <invocationType>` AWS lambda [InvocationType](http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax). Default value: `RequestResponse`.\n* `-d` Executes the deployed function. Must be specified or other parameters (-s / -r) are ignored. \n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Stage/Region Options are just for Env Vars\",\n  \"body\": \"The reason why you provide stage/region for function run regardless of whether you're running locally or remotely is to use the correct environment variable. So remember, if you provide a stage/region, that doesn't mean it'll run the function **remotely** in that stage/region. You have to pass the `-d` option to make that work.\"\n}\n[/block]\n### Examples\n```\nserverless function run\n```\nIf you don't provide a function name like in this example, the command will behave depending on where in your Project you're running it. If you're running outside of a function folder, it'll throw an error, if you're running in a function folder, you'll run this function.\n\n```\nserverless function run myFunction\n```\nIn this example, you'll **locally** run the `myFunction` function.\n\n```\nserverless function run myFunction -s dev -d\n```\nIn this example, you'll run the **deployed** `myFunction` function.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:18:30.694Z","excerpt":"Runs your local or deployed function for testing.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":13,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-run","sync_unique":"","title":"Function Run","type":"basic","updates":["56a8db6270a9440d00ef5fe2","56bdab94d1fb1323003fdaa1","570ffac951449a0e00132883"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Run

Runs your local or deployed function for testing.

```
serverless function run
```
**Must be run inside a Serverless Project.** It runs your local or deployed function for testing, using `event.json` as a sample event. This command takes the following options and parameters:

* `functionName` (parameter): The unique name of the function you want to run.
* `-s <stage>` The stage you want to run your function in.
* `-r <region>` The region you want to run your function in. Optional if your stage has only one region.
* `-l <BOOLEAN>` **Optional** - Show the log output.
* `-i <invocationType>` AWS Lambda [InvocationType](http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax). Default value: `RequestResponse`.
* `-d` Executes the **deployed** function. Without it, the function runs locally and `-s`/`-r` only determine which environment variables are used (see the note below).

> **Note: Stage/Region Options Are Just for Env Vars**
> The reason you provide a stage/region for `function run`, regardless of whether you're running locally or remotely, is to use the correct environment variables. So remember: providing a stage/region doesn't mean the function will run **remotely** in that stage/region. You have to pass the `-d` option to make that happen.

### Examples
```
serverless function run
```
If you don't provide a function name, as in this example, the command behaves depending on where in your project you run it. If you run it outside of a function folder, it'll throw an error; if you run it in a function folder, you'll run that function.

```
serverless function run myFunction
```
In this example, you'll run the `myFunction` function **locally**.

```
serverless function run myFunction -s dev -d
```
In this example, you'll run the **deployed** `myFunction` function.
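Since `function run` feeds `event.json` to your handler, it can help to see what that file looks like. Here is a minimal, purely illustrative sample event; the keys are hypothetical and entirely up to your function:

```
{
  "userId": "42",
  "action": "ping"
}
```

With that file saved in the function's folder, `serverless function run myFunction` invokes the handler locally with this object as its event; add `-d` to invoke the deployed version with the same payload.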
{"__v":3,"_id":"56dac0493dede50b00eacb65","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless function logs\n```\n**Must be run inside a Serverless Project.** It fetches the lambda function logs from CloudWatch. This command takes the following options and parameters:\n​\n* functionName (parameter): The name of the function you want get logs for. Not required if you're running from a function folder.\n* `-s <stage>` The stage you want to get function logs from. Optional if your Project has only one stage.\n* `-r <region>` Optional - The AWS region you want to get function logs from. Optional if your Stage has only one region.\n* `-t <BOOLEAN>` Optional - Tail the log output. Default is false.\n* `-d <STRING>` Optional - The duration of time in which the log history is shown. Example values: `10m`, `2h`, `1d`, `10minutes`, `1day`.  Default: `5m`.\n* `-f <STRING>` Optional - A log filter pattern.\n* `-i <NUMBER>` Optional - Tail polling interval in milliseconds. Default: `1000`.\n​\n### Examples\n```\nserverless function logs\n```\nIf you don't provide a function name like in this example, the command will behave depending on where in your project you're running it. If you're running outside of a function folder, it'll throw an error, if you're running in a function folder, you'll get this function logs.\n​\n```\nserverless function myFunction -s production -r us-east-1\n```\nIn this example, you'll get the \"myFunction\" logs from the `us-east-1` region in the `prod` stage. \n​\n```\nserverless function logs myFunction -s prod -r us-east-1 -t -d 24h -f error\n​\n```\nIn this example, you'll get the \"myFunction\" logs which has a `error` substring for the last 24 hours and will get new logs until you stop the command execution.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-10T12:17:23.630Z","excerpt":"Fetches lambda function logs from CloudWatch","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":14,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"function-logs","sync_unique":"","title":"Function Logs","type":"basic","updates":["57372ea97ad8410e009a36d1"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Function Logs

Fetches lambda function logs from CloudWatch

```
serverless function logs
```
**Must be run inside a Serverless Project.** It fetches the Lambda function's logs from CloudWatch. This command takes the following options and parameters:

* `functionName` (parameter): The name of the function you want to get logs for. Not required if you're running from a function folder.
* `-s <stage>` The stage you want to get function logs from. Optional if your project has only one stage.
* `-r <region>` The AWS region you want to get function logs from. Optional if your stage has only one region.
* `-t <BOOLEAN>` **Optional** - Tail the log output. Default is **false**.
* `-d <STRING>` **Optional** - The duration of log history to show. Example values: `10m`, `2h`, `1d`, `10minutes`, `1day`. Default: `5m`.
* `-f <STRING>` **Optional** - A log filter pattern.
* `-i <NUMBER>` **Optional** - Tail polling interval in milliseconds. Default: `1000`.

### Examples
```
serverless function logs
```
If you don't provide a function name, as in this example, the command behaves depending on where in your project you run it. If you run it outside of a function folder, it'll throw an error; if you run it in a function folder, you'll get that function's logs.

```
serverless function logs myFunction -s prod -r us-east-1
```
In this example, you'll get the `myFunction` logs from the `us-east-1` region in the `prod` stage.

```
serverless function logs myFunction -s prod -r us-east-1 -t -d 24h -f error
```
In this example, you'll get the `myFunction` logs from the last 24 hours that contain the string `error`, and you'll keep receiving new matching logs until you stop the command.
{"__v":9,"_id":"56dac0493dede50b00eacb67","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless endpoint deploy\n```\n**Must be run inside a Serverless Project.** It deploys your endpoints to AWS. This command takes the following options and parameters:\n\n* endpointName (parameter): The name of your endpoint, which is the combination of your endpoint path and endpoint method (i.e. `users/create~GET`). This is how we identify endpoints in your project, because this combination is always unique.\n* `-s <stage>` The stage you want to deploy your endpoint to. Optional if your project has only one stage.\n* `-r <region>` Optional - The AWS region you want to deploy your endpoint to. If not provided, the endpoint will be deployed to **all** the region defined in your chosen stage by default.\n\n### Examples\n```\nserverless endpoint deploy\n```\nIf you don't provide an Endpoint name like in this example, the command will behave depending on where in your Project you're running it. If you're running in your project root directory, it'll deploy all the endpoints in your project, if you're running in a subfolder, it'll only deploy all the endpoints inside that subfolder, if you're running in a function, you'll deploy only the endpoints of that function.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Deploy Functions Before Endpoints\",\n  \"body\": \"If you try to deploy an endpoint that points to a lambda function that **wasn't yet deployed**, you'll get an error message from AWS. Once you deploy your Function at least once, you're free to deploy its endpoints anytime.\"\n}\n[/block]\n```\nserverless endpoint deploy 'users/create~POST' -s prod -r us-east-1\n```\nIn this example, you'll instantly deploy the endpoint with the path `users/create` and a method of `POST`. The endpoint will be deployed to the `us-east-1` region in the `prod` stage.\n\n```\nserverless endpoint deploy 'users/create~POST' users/list~GET\n```\nIn this example, you'll deploy the **two** provided endpoints. But first you'll be prompted to choose a stage if your Project has more than one stage, after that deployment to all regions in that stage will begin.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-08T17:23:09.211Z","excerpt":"Deploys an Endpoint to AWS API Gateway.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":15,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"endpoint-deploy","sync_unique":"","title":"Endpoint Deploy","type":"basic","updates":["56fea94680838b0e00db932b"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Endpoint Deploy

Deploys an Endpoint to AWS API Gateway.

```
serverless endpoint deploy
```
**Must be run inside a Serverless Project.** It deploys your endpoints to AWS. This command takes the following options and parameters:

* `endpointName` (parameter): The name of your endpoint, which is the combination of your endpoint path and endpoint method (e.g. `users/create~GET`). This is how we identify endpoints in your project, because this combination is always unique.
* `-s <stage>` The stage you want to deploy your endpoint to. Optional if your project has only one stage.
* `-r <region>` **Optional** - The AWS region you want to deploy your endpoint to. If not provided, the endpoint will be deployed to **all** the regions defined in your chosen stage by default.

### Examples
```
serverless endpoint deploy
```
If you don't provide an endpoint name, as in this example, the command behaves depending on where in your project you run it. If you run it in your project root directory, it'll deploy all the endpoints in your project; if you run it in a subfolder, it'll deploy only the endpoints inside that subfolder; if you run it inside a function, you'll deploy only the endpoints of that function.

> **Warning: Deploy Functions Before Endpoints**
> If you try to deploy an endpoint that points to a Lambda function that **hasn't been deployed yet**, you'll get an error message from AWS. Once you deploy your function at least once, you're free to deploy its endpoints anytime.

```
serverless endpoint deploy 'users/create~POST' -s prod -r us-east-1
```
In this example, you'll instantly deploy the endpoint with the path `users/create` and the method `POST`. The endpoint will be deployed to the `us-east-1` region in the `prod` stage.

```
serverless endpoint deploy 'users/create~POST' 'users/list~GET'
```
In this example, you'll deploy the **two** provided endpoints. But first you'll be prompted to choose a stage if your project has more than one stage; after that, deployment to all regions in that stage will begin.
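For reference, the `path`/`method` pair that makes up an endpoint name comes from the endpoint definitions in the function's configuration. Here is a deliberately minimal, illustrative sketch of what that might look like in `s-function.json`; the real schema has more fields and may differ by framework version, so treat it as a hint rather than a spec:

```
{
  "endpoints": [
    {
      "path": "users/create",
      "method": "POST"
    }
  ]
}
```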
{"__v":2,"_id":"56ddb1c828924f200028f359","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless endpoint remove\n```\n**Must be run inside a Serverless Project.** It removes deployed endpoints from your AWS account based on the provided stage/region. It takes the following options and parameters:\n\n* functionNames (parameter): The names of the endpoints you want to remove, which is just the combination of endpoints path and method.\n* `-s <stage>` the stage you want to remove endpoints from.\n* `-r <region>` the region in your chosen stage you want to remove endpoints from.\n* `-a <BOOLEAN>` **Optional** - removes all endpoints from your AWS account. Default is false.\n\n### Examples\n```\nserverless endpoint remove user/create~POST\n```\nIn this example, you'll be prompted to choose a stage and region to remove your endpoints from. After that the endpoint with the path `user/create` and method `POST` will be removed from your AWS account.\n\n```\nserverless endpoint remove user/create~POST user/list~GET -s prod -r us-east-1\n```\nIn this example, you'll instantly remove the **two** provided endpoints from the `us-east-1` region in the `prod` stage.\n\n```\nserverless endpoint remove --all -s prod -r us-east-1\n```\nIn this example, you'll instantly remove all the endpoints in your project from the `us-east-1` region in the `prod` stage. Super dangerous!","category":"56dac0483dede50b00eacb53","createdAt":"2016-03-07T16:52:24.119Z","excerpt":"Removes deployed endpoints from your AWS account based on the provided stage/region.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":16,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"endpoint-remove","sync_unique":"","title":"Endpoint Remove","type":"basic","updates":["576d3be7a39bbf0e00db52bd"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Endpoint Remove

Removes deployed endpoints from your AWS account based on the provided stage/region.

```
serverless endpoint remove
```
**Must be run inside a Serverless Project.** It removes deployed endpoints from your AWS account based on the provided stage/region. It takes the following options and parameters:

* `endpointNames` (parameter): The names of the endpoints you want to remove, each of which is just the combination of the endpoint's path and method.
* `-s <stage>` The stage you want to remove endpoints from.
* `-r <region>` The region in your chosen stage you want to remove endpoints from.
* `-a <BOOLEAN>` **Optional** - Removes all endpoints from your AWS account. Default is **false**.

### Examples
```
serverless endpoint remove user/create~POST
```
In this example, you'll be prompted to choose a stage and region to remove your endpoints from. After that, the endpoint with the path `user/create` and the method `POST` will be removed from your AWS account.

```
serverless endpoint remove user/create~POST user/list~GET -s prod -r us-east-1
```
In this example, you'll instantly remove the **two** provided endpoints from the `us-east-1` region in the `prod` stage.

```
serverless endpoint remove --all -s prod -r us-east-1
```
In this example, you'll instantly remove all the endpoints in your project from the `us-east-1` region in the `prod` stage. Super dangerous!
{"__v":5,"_id":"56dac0493dede50b00eacb68","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless event deploy\n```\n**Must be run inside a Serverless Project.** It deploys your events to AWS. This command takes the following options and parameters:\n\n* eventNames (parameter): The names of the events you want to deploy.\n* `-s <stage>` The stage you want to deploy your event to. Optional if your project has only one stage.\n* `-r <region>` Optional - The AWS region you want to deploy your event to. If not provided, the endpoint will be deployed to **all** the region defined in your chosen stage by default.\n[block:callout]\n{\n  \"type\": \"danger\",\n  \"title\": \"Event Sources Permissions\",\n  \"body\": \"For event sources that are following the **push model** (S3, SNS & Schedule), we create all the required permissions to invoke the lambda for you, so deploying will work the right away out of the box. But for the event sources that are following the **pull model** (DynamoDB & Kinesis Streams), you have to give your lambda function permission to access DynamoDB/Kinesis before deploying the event, otherwise deployment will fail.\"\n}\n[/block]\n### Examples\n```\nserverless event deploy\n```\nIf you don't provide an event path like in this example, the command will behave depending on where in your project you're running it. If you're running in your project root directory, it'll deploy all events in your project, if you're running in a subfolder, it'll only deploy all the events inside that subfolder, if you're running in a function, you'll deploy only the events of that function.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"body\": \"If you try to deploy an event that is associated to a lambda function that **wasn't yet deployed**, you'll get an error message from AWS. Once you deploy your Function at least once, you're free to deploy its events anytime.\",\n  \"title\": \"Deploy Functions Before Events\"\n}\n[/block]\n```\nserverless event deploy myEvent -s prod -r us-east-1\n```\nIn this example, you'll instantly deploy the event named `myEvent` that is in the function `myFunction`. The event will be deployed to the `us-east-1` region in the `prod` stage.\n\n```\nserverless event deploy myEvent myOtherEvent\n```\nIn this example, you'll deploy the **two** events `myEvent` and `myOtherEvent` because you provided two event names. But first you'll be prompted to choose a stage if your project has more than one stage, after that deployment to all regions in that stage will begin.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-10T09:57:33.026Z","excerpt":"Deploys event sources for your lambda.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":17,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"event-deploy","sync_unique":"","title":"Event Deploy","type":"basic","updates":["56f2e6ce4a8dae0e009ab323"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Event Deploy

Deploys event sources for your lambda.

```
serverless event deploy
```
**Must be run inside a Serverless Project.** It deploys your events to AWS. This command takes the following options and parameters:

* eventNames (parameter): The names of the events you want to deploy.
* `-s <stage>` The stage you want to deploy your events to. Optional if your project has only one stage.
* `-r <region>` **Optional** - The AWS region you want to deploy your events to. If not provided, the events will be deployed to **all** the regions defined in your chosen stage by default.

> **Event Sources Permissions:** For event sources that follow the **push model** (S3, SNS & Schedule), we create all the required permissions to invoke the Lambda for you, so deployment works right away out of the box. For event sources that follow the **pull model** (DynamoDB & Kinesis Streams), you have to give your Lambda function permission to access DynamoDB/Kinesis before deploying the event, otherwise deployment will fail.

### Examples
```
serverless event deploy
```
If you don't provide an event name like in this example, the command's behavior depends on where in your project you run it. If you run it in your project's root directory, it'll deploy all the events in your project; if you run it in a subfolder, it'll only deploy the events inside that subfolder; and if you run it inside a function, it'll deploy only that function's events.

> **Deploy Functions Before Events:** If you try to deploy an event that is associated with a Lambda function that **hasn't been deployed yet**, you'll get an error message from AWS. Once you've deployed your function at least once, you're free to deploy its events anytime.

```
serverless event deploy myEvent -s prod -r us-east-1
```
In this example, you'll instantly deploy the event named `myEvent` that belongs to the function `myFunction`. The event will be deployed to the `us-east-1` region in the `prod` stage.

```
serverless event deploy myEvent myOtherEvent
```
In this example, you'll deploy the **two** events `myEvent` and `myOtherEvent` because you provided two event names. You'll first be prompted to choose a stage if your project has more than one; after that, deployment to all regions in that stage will begin.
{"__v":2,"_id":"56ddb489e334b8170069b681","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"```\nserverless event deploy\n```\n**Must be run inside a Serverless Project.** it removes event sources from your AWS account based on the provided stage/region.. This command takes the following options and parameters:\n\n* eventNames (parameter): The names of the events you want to remove.\n* `-s <stage>` The stage you want to remove your events from. Optional if your project has only one stage.\n* `-r <region>` Optional - The AWS region you want to remove your events from. If not provided, the event will be removed from **all** the region defined in your chosen stage by default.\n* `-a <BOOLEAN>` **Optional** - removes all events from your AWS account. Default is false.\n\n### Examples\n```\nserverless event remove myEvent\n```\nIn this example, you'll be prompted to choose a stage and region to remove your events from. After that the `myEvent` event will be removed from your AWS account.\n\n```\nserverless event remove myEvent myOtherEvent -s prod -r us-east-1\n```\nIn this example, you'll instantly remove the **two** events `myEvent` and `myOtherEvent` from the `us-east-1` region in the `prod` stage.\n\n```\nserverless event remove --all -s prod -r us-east-1\n```\nIn this example, you'll instantly remove all the events in your project from the `us-east-1` region in the `prod` stage.","category":"56dac0483dede50b00eacb53","createdAt":"2016-03-07T17:04:09.941Z","excerpt":"Removes event sources from your AWS account based on the provided stage/region.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":18,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"event-remove","sync_unique":"","title":"Event Remove","type":"basic","updates":["574b16d6f2070e17000fc88f","5770e9532659e20e00c90ac8"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Event Remove

Removes event sources from your AWS account based on the provided stage/region.

```
serverless event remove
```
**Must be run inside a Serverless Project.** It removes event sources from your AWS account based on the provided stage/region. This command takes the following options and parameters:

* eventNames (parameter): The names of the events you want to remove.
* `-s <stage>` The stage you want to remove your events from. Optional if your project has only one stage.
* `-r <region>` **Optional** - The AWS region you want to remove your events from. If not provided, the events will be removed from **all** the regions defined in your chosen stage by default.
* `-a <BOOLEAN>` **Optional** - Removes all events from your AWS account. Default is false.

### Examples
```
serverless event remove myEvent
```
In this example, you'll be prompted to choose a stage and region to remove your events from. After that, the `myEvent` event will be removed from your AWS account.

```
serverless event remove myEvent myOtherEvent -s prod -r us-east-1
```
In this example, you'll instantly remove the **two** events `myEvent` and `myOtherEvent` from the `us-east-1` region in the `prod` stage.

```
serverless event remove --all -s prod -r us-east-1
```
In this example, you'll instantly remove all the events in your project from the `us-east-1` region in the `prod` stage.
{"__v":1,"_id":"56dac0493dede50b00eacb6b","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless resources deploy\n```\n**Must be run inside a Serverless Project.** It updates your AWS resources by collecting all the resources defined in your `project/s-resources-cf.json` file and populates all the referenced templates and variables. It takes the following options:\n\n* `-s <stage>` the stage you want to deploy resources to\n* `-r <region>` the region in your chosen stage you want to deploy resources to\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless resources deploy\n```\nIn this example, you'll be prompted to choose a stage and region to deploy your resources to. After that it'll deploy the resources to the chosen stage and region.\n\n```\nserverless resources deploy -s dev -r us-east-1\n```\nIn this example, you'll instantly deploy your resources to the `us-east-1` region in the `dev` stage.\n\n```\nserverless resources deploy -c -s dev -r us-east-1\n```\nIn this example, you won't deploy the CF template file to AWS. It'll only generate a CF template file called `s-resources-cf-dev-useast1.json` inside the `_meta/resources` folder. You'll have to upload this file manually using the AWS console to deploy your resources.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-02T14:20:47.117Z","excerpt":"Deploys all your project's and modules's CloudFormation resources.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":19,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"resources-deploy","sync_unique":"","title":"Resources Deploy","type":"basic","updates":["56c50bebd1b8770d00922287"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Resources Deploy

Deploys all your project's and modules' CloudFormation resources.

```
serverless resources deploy
```
**Must be run inside a Serverless Project.** It updates your AWS resources by collecting all the resources defined in your `project/s-resources-cf.json` file and populating all the referenced templates and variables. It takes the following options:

* `-s <stage>` The stage you want to deploy resources to.
* `-r <region>` The region in your chosen stage you want to deploy resources to.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples
```
serverless resources deploy
```
In this example, you'll be prompted to choose a stage and region to deploy your resources to. After that, it'll deploy the resources to the chosen stage and region.

```
serverless resources deploy -s dev -r us-east-1
```
In this example, you'll instantly deploy your resources to the `us-east-1` region in the `dev` stage.

```
serverless resources deploy -c -s dev -r us-east-1
```
In this example, you won't deploy the CF template file to AWS. It'll only generate a CF template file called `s-resources-cf-dev-useast1.json` inside the `_meta/resources` folder. You'll have to upload this file manually using the AWS console to deploy your resources.
{"__v":1,"_id":"56dac0493dede50b00eacb6c","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless resources remove\n```\n**Must be run within a Serverless Project**. Removes CloudFormation resources from a given stage/region in your Serverless project. It takes the following options:\n\n* `-s <stage>` the stage that contains the region you want to remove resources from.\n* `-r <region>` the region you want to remove resources from.\n* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.\n\n### Examples\n```\nserverless resources remove -s prod -r us-west-2\n```\nIn this example, the command will instantly remove CloudFormation resources from the `us-west-2` region in the `prod` stage.\n\n```\nserverless resources remove -s prod -r us-west-2 -c\n```\nIn this example, you've set the `-c` option to `true`, so the command will only output a CF template in the `_meta/resources` folder that you can execute manually on the AWS console to remove the resources.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T03:55:29.312Z","excerpt":"Removes CloudFormation resources from a given stage/region in your Serverless Project","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":20,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"resources-remove","sync_unique":"","title":"Resources Remove","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Resources Remove

Removes CloudFormation resources from a given stage/region in your Serverless Project.

```
serverless resources remove
```
**Must be run within a Serverless Project.** Removes CloudFormation resources from a given stage/region in your Serverless project. It takes the following options:

* `-s <stage>` The stage that contains the region you want to remove resources from.
* `-r <region>` The region you want to remove resources from.
* `-c <BOOLEAN>` **Optional** - Doesn't execute CloudFormation if true. Default is **false**.

### Examples
```
serverless resources remove -s prod -r us-west-2
```
In this example, the command will instantly remove CloudFormation resources from the `us-west-2` region in the `prod` stage.

```
serverless resources remove -s prod -r us-west-2 -c
```
In this example, you've set the `-c` option to `true`, so the command will only output a CF template in the `_meta/resources` folder that you can execute manually on the AWS console to remove the resources.
{"__v":1,"_id":"56ddad7368dd152900e6386c","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"```\nserverless resources diff\n```\n**Must be run inside a Serverless Project.** It outputs the different between your deployed resources and the resources currently defined in your project. It takes the following options:\n\n* `-s <stage>` the stage you want to deploy resources to\n* `-r <region>` the region in your chosen stage you want to deploy resources to\n* `-j <BOOLEAN>` **Optional** - Output unformatted JSON. Default is false.\n\n```\nserverless resources diff\n```\nIn this example, you'll be prompted to choose a stage and region to fetch your deployed resources. After that it'll output the difference between your deployed resources and the resources currently defined in your project.\n\n```\nserverless resources diff -s prod -r us-east-1 -j\n```\nIn this example, you'll instantly see the difference between your deployed resources and the resources currently defined in your project without any prompts, and because you provided the `-j` option, you'll see an unformatted JSON output.","category":"56dac0483dede50b00eacb53","createdAt":"2016-03-07T16:33:55.461Z","excerpt":"Outputs the diff between your deployed resources and the resources currently defined in your project.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":21,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"resources-diff","sync_unique":"","title":"Resources Diff","type":"basic","updates":["576d3f6179f35917002dbfaa"],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Resources Diff

Outputs the diff between your deployed resources and the resources currently defined in your project.

```
serverless resources diff
```
**Must be run inside a Serverless Project.** It outputs the difference between your deployed resources and the resources currently defined in your project. It takes the following options:

* `-s <stage>` The stage you want to diff resources for.
* `-r <region>` The region in your chosen stage you want to diff resources for.
* `-j <BOOLEAN>` **Optional** - Output unformatted JSON. Default is false.

### Examples
```
serverless resources diff
```
In this example, you'll be prompted to choose a stage and region to fetch your deployed resources from. After that, it'll output the difference between your deployed resources and the resources currently defined in your project.

```
serverless resources diff -s prod -r us-east-1 -j
```
In this example, you'll instantly see the difference between your deployed resources and the resources currently defined in your project without any prompts, and because you provided the `-j` option, you'll see unformatted JSON output.
{"__v":2,"_id":"56dac0493dede50b00eacb69","api":{"auth":"required","params":[],"results":{"codes":[{"name":"","code":"{}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":""},"body":"```\nserverless dash deploy\n```\n**Must be run inside a Serverless Project.** An interactive CLI dashboard that makes it easy to select and deploy functions, endpoints and events concurrently. This command is intended to offer great user experience so it can only be used interactively.\n\nThis command will prompt you to choose a stage and region for your deployment, and it'll list all the functions and their endpoints and events. You just select whatever you want to deploy, and hit `Deploy`. If you run this command in the root directory of your project, it'll list all functions, endpoints and events in your Project. If you run it inside a subfolder, it'll only list the functions, endpoints and events inside that subfolder, and if you run it inside a function, it'll only list that function, its endpoints and its events.","category":"56dac0483dede50b00eacb53","createdAt":"2015-12-21T10:20:27.764Z","excerpt":"Prompts the deployment dashboard for functions and endpoints.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":22,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"dash-deploy","sync_unique":"","title":"Dash Deploy","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Dash Deploy

Prompts the deployment dashboard for functions and endpoints.

```
serverless dash deploy
```
**Must be run inside a Serverless Project.** An interactive CLI dashboard that makes it easy to select and deploy functions, endpoints and events concurrently. This command is intended to offer a great user experience, so it can only be used interactively.

This command will prompt you to choose a stage and region for your deployment, and it'll list all the functions and their endpoints and events. You just select whatever you want to deploy and hit `Deploy`. If you run this command in the root directory of your project, it'll list all the functions, endpoints and events in your project. If you run it inside a subfolder, it'll only list the functions, endpoints and events inside that subfolder, and if you run it inside a function, it'll only list that function, its endpoints and its events.
{"__v":3,"_id":"56dac0493dede50b00eacb6a","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"```\nserverless dash summary\n```\n**Must be run inside a Serverless project.** It displays a summary of your Serverless project, number of stages, regions, functions, endpoints and events. This simple command doesn't take any options or parameters.","category":"56dac0483dede50b00eacb53","createdAt":"2016-02-03T04:33:08.572Z","excerpt":"Displays a summary of your Serverless Project state.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":23,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"dash-summary","sync_unique":"","title":"Dash Summary","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Dash Summary

Displays a summary of your Serverless Project state.

```
serverless dash summary
```
**Must be run inside a Serverless Project.** It displays a summary of your Serverless project: the number of stages, regions, functions, endpoints and events. This simple command doesn't take any options or parameters.
{"__v":10,"_id":"56dacaf26b57660b0000eb1e","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"One of our core principles is to make the Framework completely extensible. To stick to that principle, we have designed the entire framework so developers can modify or extend everything that it does.\n\nLike you've seen in the CLI Reference, all of the functionality in the Serverless Framework is divided into Actions. For example, the functionality to create a new Serverless project exists in an action, and creating a new stage in that project exists in a separate action. If an operation is long or complex, like deploying a Lambda function, it is divided into multiple actions so developers can modify parts of the process, instead of the whole process. Actions can also call other actions. For example, the ProjectCreate action calls StageCreate and RegionCreate actions.\n\nEven better, Serverless also features Hooks which allow you to add functions that run before and after an individual action. Like we said, extensibility is one of our core principles. To add custom actions and hooks, make a **Serverless Plugin**. Plugins and actions are basically the same thing, but we refer to the core functionality as actions, and custom functionality as plugins. \n\nThe following sections of the docs will walk you through how to create a custom action, or a plugin, and make use of the Serverless API. If you're curious how action/plugin files look like, take a look at our [Actions folder in our codebase](https://github.com/serverless/serverless/tree/master/lib/actions). It includes all the functionality that was described in the CLI Reference section. Once you take a look at some action files, you'll notice a pattern for creating actions/plugins. The plugin that you will create will look very similar. So always refer to those default action files while developing your plugin as an example if you ever get stuck.\n\nEach of these classes contain lots of methods to help you manipulate your project. Usually you won't have to init these classes yourself, but an instance will be returned to you by using certain methods (i.e. `_this.S.getProject()` returns a Project class instance). In the following sections we'll explore each of these classes and methods in detail, starting with the main Serverless class.\n\n## Installing Plugins\nYou can extend the Serverless Framework through Serverless Plugins that have been authored by our community.  They are packaged as *npm* modules.\n\nRight now, the available plugins are listed in the Serverless Framework's README file (we are working on making discovery easier).  
To install them, find their npm names and follow these steps:\n\n* Go to the root of your Serverless Project\n* Run `npm install <plugin> --save`\n* In your Project's `s-project.json`, in the `plugins` property, add the npm name of your recently added plugin to the array, like this:\n```\nplugins: [ \n     \"serverless-optimizer-plugin\"\n]\n```","category":"56dac0483dede50b00eacb54","createdAt":"2016-03-05T12:02:58.695Z","excerpt":"An overview of Serverless Plugins, and how to extend the framework functionality.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":0,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"plugins","sync_unique":"","title":"Plugins Overview","type":"basic","updates":[],"user":"562120887c515c0d008eee9b","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Plugins Overview

An overview of Serverless Plugins, and how to extend the framework functionality.

One of our core principles is to make the Framework completely extensible. To stick to that principle, we have designed the entire framework so developers can modify or extend everything that it does.

As you've seen in the CLI Reference, all of the functionality in the Serverless Framework is divided into Actions. For example, the functionality to create a new Serverless project exists in one action, and creating a new stage in that project exists in a separate action. If an operation is long or complex, like deploying a Lambda function, it is divided into multiple actions so developers can modify parts of the process instead of the whole process. Actions can also call other actions. For example, the ProjectCreate action calls the StageCreate and RegionCreate actions.

Even better, Serverless also features Hooks, which allow you to add functions that run before and after an individual action (see the short sketch at the end of this section). Like we said, extensibility is one of our core principles. To add custom actions and hooks, make a **Serverless Plugin**. Plugins and actions are basically the same thing, but we refer to the core functionality as actions and custom functionality as plugins.

The following sections of the docs will walk you through how to create a custom action, or a plugin, and make use of the Serverless API. If you're curious what action/plugin files look like, take a look at the [Actions folder in our codebase](https://github.com/serverless/serverless/tree/master/lib/actions). It includes all the functionality that was described in the CLI Reference section. Once you take a look at some action files, you'll notice a pattern for creating actions/plugins; the plugin you create will look very similar. So refer to those default action files as examples while developing your plugin if you ever get stuck.

Each of the Serverless classes contains lots of methods to help you manipulate your project. Usually you won't have to instantiate these classes yourself; instead, an instance will be returned to you by certain methods (i.e. `_this.S.getProject()` returns a Project class instance). In the following sections we'll explore each of these classes and methods in detail, starting with the main Serverless class.

## Installing Plugins
You can extend the Serverless Framework through Serverless Plugins that have been authored by our community. They are packaged as *npm* modules.

Right now, the available plugins are listed in the Serverless Framework's README file (we are working on making discovery easier). To install them, find their npm names and follow these steps:

* Go to the root of your Serverless Project.
* Run `npm install <plugin> --save`.
* In your Project's `s-project.json`, in the `plugins` property, add the npm name of your recently added plugin to the array, like this:

```
plugins: [
  "serverless-optimizer-plugin"
]
```
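To make the hook concept concrete before the full boilerplate in the next section, here is a minimal sketch of a plugin that only registers hooks around the core `functionRun` action. It reuses the `S.classes.Plugin`, `S.addHook` and Bluebird conventions shown in "Your First Plugin"; the plugin name, method names and log messages are illustrative, not part of the framework.

```javascript
'use strict';

const BbPromise = require('bluebird');

module.exports = function (S) {

  // A minimal sketch: a plugin that only adds "pre" and "post" hooks around the
  // core "functionRun" action. Names other than S.classes.Plugin, S.addHook and
  // registerHooks() are illustrative.
  class HookExamplePlugin extends S.classes.Plugin {

    constructor() {
      super();
      this.name = 'hookExamplePlugin';
    }

    registerHooks() {
      // Run _logBefore before every "functionRun" action...
      S.addHook(this._logBefore.bind(this), { action: 'functionRun', event: 'pre' });
      // ...and _logAfter after it.
      S.addHook(this._logAfter.bind(this), { action: 'functionRun', event: 'post' });
      return BbPromise.resolve();
    }

    _logBefore(evt) {
      console.log('About to run a function');
      return BbPromise.resolve(evt); // always resolve with evt so the action chain continues
    }

    _logAfter(evt) {
      console.log('Finished running a function');
      return BbPromise.resolve(evt);
    }
  }

  return HookExamplePlugin;
};
```

Dropped into a plugin's `index.js` (created by `serverless plugin create`, covered next) and registered in `s-project.json`, these two hooks should fire around every function run.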
{"__v":4,"_id":"56dac04a3dede50b00eacb7a","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"The first step for creating a new plugin is to create the initial boilerplate by running the following command in the root directory of your project:\n\n```\nserverless plugin create\n```\n\nThis command will ask you for a plugin name, and then create a new folder in the root directory of your Serverless project called `plugins` if it doesn't already exist. It will also create a subfolder inside that `plugins` folder with the name you provided, along with some boilerplate files for the plugin. The most important file that will be generated is the `index.js`. This is where you'll be developing your plugin. If you open this `index.js` file you'll find some starter code and helpful comments to get you started. You can check the full file with all the comments by [clicking here](https://github.com/serverless/serverless/blob/master/lib/templates/plugin/index.js). Here's a simpler version:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"'use strict';\\n\\n\\nconst path  = require('path'),\\n  fs        = require('fs'),\\n  BbPromise = require('bluebird'); // Serverless uses Bluebird Promises and we recommend you do to because they provide more than your average Promise :)\\n\\nmodule.exports = function(S) { // Always pass in the ServerlessPlugin Class\\n\\n  /**\\n   * Adding/Manipulating Serverless classes\\n   * - You can add or manipulate Serverless classes like this\\n   */\\n\\n  S.classes.Project.newStaticMethod     = function() { console.log(\\\"A new method!\\\"); };\\n  S.classes.Project.prototype.newMethod = function() { S.classes.Project.newStaticMethod(); };\\n\\n  /**\\n   * Extending the Plugin Class\\n   * - Here is how you can add custom Actions and Hooks to Serverless.\\n   * - This class is only required if you want to add Actions and Hooks.\\n   */\\n\\n  class PluginBoilerplate extends S.classes.Plugin {\\n\\n    constructor() {\\n      super();\\n      this.name = 'myPlugin';\\n    }\\n\\n    registerActions() {\\n\\n      S.addAction(this._customAction.bind(this), {\\n        handler:       'customAction',\\n        description:   'A custom action from a custom plugin',\\n        context:       'custom',\\n        contextAction: 'run',\\n        options:       [{ \\n          option:      'option',\\n          shortcut:    'o',\\n          description: 'test option 1'\\n        }],\\n        parameters: [ \\n          {\\n            parameter: 'paths',\\n            description: 'One or multiple paths to your function',\\n            position: '0->'\\n          }\\n        ]\\n      });\\n\\n      return BbPromise.resolve();\\n    }\\n\\n    registerHooks() {\\n\\n      S.addHook(this._hookPre.bind(this), {\\n        action: 'functionRun',\\n        event:  'pre'\\n      });\\n\\n      S.addHook(this._hookPost.bind(this), {\\n        action: 'functionRun',\\n        event:  'post'\\n      });\\n\\n      return BbPromise.resolve();\\n    }\\n\\n    _customAction(evt) {\\n\\n      let _this = this;\\n\\n      return new BbPromise(function (resolve, reject) {\\n\\n        // console.log(evt)           // Contains Action Specific data\\n        // console.log(_this.S)       // Contains Project Specific data\\n        // console.log(_this.S.state) // Contains tons of useful methods for you to use in your plugin.  
It's the official API for plugin developers.\\n\\n        console.log('-------------------');\\n        console.log('YOU JUST RAN YOUR CUSTOM ACTION, NICE!');\\n        console.log('-------------------');\\n\\n        return resolve(evt);\\n\\n      });\\n    }\\n\\n    _hookPre(evt) {\\n\\n      let _this = this;\\n\\n      return new BbPromise(function (resolve, reject) {\\n\\n        console.log('-------------------');\\n        console.log('YOUR SERVERLESS PLUGIN\\\\'S CUSTOM \\\"PRE\\\" HOOK HAS RUN BEFORE \\\"FunctionRun\\\"');\\n        console.log('-------------------');\\n\\n        return resolve(evt);\\n\\n      });\\n    }\\n\\n    _hookPost(evt) {\\n\\n      let _this = this;\\n\\n      return new BbPromise(function (resolve, reject) {\\n\\n        console.log('-------------------');\\n        console.log('YOUR SERVERLESS PLUGIN\\\\'S CUSTOM \\\"POST\\\" HOOK HAS RUN AFTER \\\"FunctionRun\\\"');\\n        console.log('-------------------');\\n\\n        return resolve(evt);\\n\\n      });\\n    }\\n  }\\n  return PluginBoilerplate;\\n\\n};\\n\",\n      \"language\": \"javascript\",\n      \"name\": \"index.js\"\n    }\n  ]\n}\n[/block]\nAs you can see, we're passing the Serverless class instance `S` to our plugin. This instance is the starting point for the entire Serverless API, which gives you all the power you need to start writing your plugin and integrate it with your project. Keep reading!","category":"56dac0483dede50b00eacb54","createdAt":"2016-01-18T06:16:55.765Z","excerpt":"Creating and exploring your first Serverless Plugin.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":1,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"serverless","sync_unique":"","title":"Your First Plugin","type":"basic","updates":["56b302d4af176a0d00964c87"],"user":"5611c1e58c76a61900fd0739","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Your First Plugin

Creating and exploring your first Serverless Plugin.

The first step in creating a new plugin is to generate the initial boilerplate by running the following command in the root directory of your project:

```
serverless plugin create
```

This command will ask you for a plugin name, and then create a new folder called `plugins` in the root directory of your Serverless project if it doesn't already exist. It will also create a subfolder inside that `plugins` folder with the name you provided, along with some boilerplate files for the plugin. The most important generated file is `index.js`. This is where you'll be developing your plugin. If you open this `index.js` file, you'll find some starter code and helpful comments to get you started. You can check the full file with all the comments by [clicking here](https://github.com/serverless/serverless/blob/master/lib/templates/plugin/index.js). Here's a simpler version:

```javascript
'use strict';

const path  = require('path'),
  fs        = require('fs'),
  BbPromise = require('bluebird'); // Serverless uses Bluebird Promises and we recommend you do too because they provide more than your average Promise :)

module.exports = function(S) { // Always pass in the ServerlessPlugin Class

  /**
   * Adding/Manipulating Serverless classes
   * - You can add or manipulate Serverless classes like this
   */

  S.classes.Project.newStaticMethod     = function() { console.log("A new method!"); };
  S.classes.Project.prototype.newMethod = function() { S.classes.Project.newStaticMethod(); };

  /**
   * Extending the Plugin Class
   * - Here is how you can add custom Actions and Hooks to Serverless.
   * - This class is only required if you want to add Actions and Hooks.
   */

  class PluginBoilerplate extends S.classes.Plugin {

    constructor() {
      super();
      this.name = 'myPlugin';
    }

    registerActions() {

      S.addAction(this._customAction.bind(this), {
        handler:       'customAction',
        description:   'A custom action from a custom plugin',
        context:       'custom',
        contextAction: 'run',
        options:       [{
          option:      'option',
          shortcut:    'o',
          description: 'test option 1'
        }],
        parameters: [
          {
            parameter: 'paths',
            description: 'One or multiple paths to your function',
            position: '0->'
          }
        ]
      });

      return BbPromise.resolve();
    }

    registerHooks() {

      S.addHook(this._hookPre.bind(this), {
        action: 'functionRun',
        event:  'pre'
      });

      S.addHook(this._hookPost.bind(this), {
        action: 'functionRun',
        event:  'post'
      });

      return BbPromise.resolve();
    }

    _customAction(evt) {

      let _this = this;

      return new BbPromise(function (resolve, reject) {

        // console.log(evt)           // Contains Action Specific data
        // console.log(_this.S)       // Contains Project Specific data
        // console.log(_this.S.state) // Contains tons of useful methods for you to use in your plugin.  It's the official API for plugin developers.

        console.log('-------------------');
        console.log('YOU JUST RAN YOUR CUSTOM ACTION, NICE!');
        console.log('-------------------');

        return resolve(evt);

      });
    }

    _hookPre(evt) {

      let _this = this;

      return new BbPromise(function (resolve, reject) {

        console.log('-------------------');
        console.log('YOUR SERVERLESS PLUGIN\'S CUSTOM "PRE" HOOK HAS RUN BEFORE "FunctionRun"');
        console.log('-------------------');

        return resolve(evt);

      });
    }

    _hookPost(evt) {

      let _this = this;

      return new BbPromise(function (resolve, reject) {

        console.log('-------------------');
        console.log('YOUR SERVERLESS PLUGIN\'S CUSTOM "POST" HOOK HAS RUN AFTER "FunctionRun"');
        console.log('-------------------');

        return resolve(evt);

      });
    }
  }
  return PluginBoilerplate;

};
```

As you can see, we're passing the Serverless class instance `S` to our plugin. This instance is the starting point for the entire Serverless API, which gives you all the power you need to start writing your plugin and integrating it with your project. Keep reading!
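One assumption worth stating about the boilerplate above: since the action is registered with `context: 'custom'` and `contextAction: 'run'`, the custom action should be reachable from the CLI as something like `serverless custom run`, with `-o` passing the registered option and any trailing arguments filling the `paths` parameter. The exact invocation may differ between framework versions, so treat this as an illustration rather than a guarantee.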
{"__v":4,"_id":"56dac04a3dede50b00eacb7b","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"To start manipulating your project and add extra functionality, you'll need to use the Serverless API. Within your action method, you have access to the Serverless instance. This instance is the starting point of the whole Serverless API. It gives you access to all of the Serverless classes, each of which has tons of helpful methods for manipulating Serverless projects. This Serverless instance itself has some methods to get you started with the classes quickly. On top of that, the Serverless instance also gives you access to all of the useful utility functions that we've written.\n\nThe Serverless instance is directly passed to your plugin. So you can use it right away like this like this...\n\n```\n_customAction(evt) {\n\tlet Serverless = S;\n}\n```\n\n# Serverless Classes\nBelow is a list of all of our classes. Using these classes and their methods together will give you complete control over your project, allowing you to extend the framework core functionality very easily. Because we're making rapid changes and moving fast, we're keeping all the docs for the API methods inline in classes files, that makes it easier to maintain and encourages users to be familiar with our codebase. So to learn more on each class, checkout its file:\n\n- **Project:** This class represents a single Serverless project. This is your entry point to all of the other classes. For a full API Reference on this class constructor and methods, [checkout the Project class file](https://github.com/serverless/serverless/blob/master/lib/Project.js).\n- **Function:** This class represents a single Serverless function. For a full API Reference on this class constructor and methods, [checkout the Function class file](https://github.com/serverless/serverless/blob/master/lib/Function.js).\n- **Endpoint:** This class represents a single Serverless endpoint. For a full API Reference on this class constructor and methods, [checkout the Endpoint class file](https://github.com/serverless/serverless/blob/master/lib/Endpoint.js).\n- **Event:** This class represents a single Serverless event. For a full API Reference on this class constructor and methods, [checkout the Event class file](https://github.com/serverless/serverless/blob/master/lib/Event.js).\n- **Stage:** This class represents a single Serverless stage. For a full API Reference on this class constructor and methods, [checkout the Stage class file](https://github.com/serverless/serverless/blob/master/lib/Stage.js).\n- **Region:** This class represents a single Serverless region. For a full API Reference on this class constructor and methods, [checkout the Region class file](https://github.com/serverless/serverless/blob/master/lib/Region.js).\n- **Resources:** This class represents your project CloudFormation resources. For a full API Reference on this class constructor and methods, [checkout the Resources class file](https://github.com/serverless/serverless/blob/master/lib/Resources.js).\n- **Variables:** This class represent your project variables. For a full API Reference on this class constructor and methods, [checkout the Variables class file](https://github.com/serverless/serverless/blob/master/lib/Variables.js).\n- **Templates:** This class represents your project templates. 
For a full API Reference on this class constructor and methods, [checkout the Templates class file](https://github.com/serverless/serverless/blob/master/lib/Templates.js).\n- **ProviderAws:** This class represents AWS as a provider. For a full API Reference on this class constructor and methods, [checkout the ProviderAws class file](https://github.com/serverless/serverless/blob/master/lib/ProviderAws.js).\n \n\nThis Serverless instance has a property called `classes`, which is just an object that contains each of our classes. So you can init each class like this:\n\n```\nlet Project = new S.classes.Project(...);\nlet Function = new S.classes.Function(...);\n// and so on\n```\n\nOf course each class is constructed differently and requires different parameters. Checkout each class file constructor for better understanding of what's needed.\n\n# Serverless Methods\nMost of the time, you won't need to initialize the Serverless classes mentioned earlier, instead, we're providing some helpful methods in the Serverless instance that makes it easy to get started manipulating Serverless projects.\n\n## getProject()\n\n```\nlet Project = S.getProject();\n```\n\nReturns a **Project class instance** that contains all your project data. You can then use all the methods of the Project class for more power.\n\n## getProvider()\n\n```\nlet aws = S.getProvider();\n```\n\nReturns a **Provider class instance** that contains powerful methods to interact with AWS.\n\n## updateConfig(config)\n\n```\nS.updateConfig({ projectPath: 'path/to/project' });\n```\n\nUpdates the Serverless Instances configuration. Useful when you want to set a project to the Serverless instance by providing a `projectPath`.\n\n# Serverless Utilities\nTo give you even more power, we've included all of the utility functions we're using in our framework in the Serverless instance, giving you access to some common functionalities that are otherwise tedious to implement. You can access the utility functions through the `S.utils` property of the Serverless instance.\n\nBelow is a simple example utility function that checks whether a directory exists or not and returns a **Boolean**. To learn more about all of our utility functions, checkout the `utils` file. It's pretty well documented :)\n\n```\nS.utils.dirExistsSync('path/to/dir');\n```","category":"56dac0483dede50b00eacb54","createdAt":"2016-01-18T06:17:23.420Z","excerpt":"Exploring the Serverless API, its classes and methods.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":2,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"state","sync_unique":"","title":"The Serverless API","type":"basic","updates":[],"user":"5611c1e58c76a61900fd0739","version":"56dac0473dede50b00eacb50","childrenPages":[]}

The Serverless API

Exploring the Serverless API, its classes and methods.

To start manipulating your project and add extra functionality, you'll need to use the Serverless API. Within your action method, you have access to the Serverless instance. This instance is the starting point of the whole Serverless API. It gives you access to all of the Serverless classes, each of which has tons of helpful methods for manipulating Serverless projects. This Serverless instance itself has some methods to get you started with the classes quickly. On top of that, the Serverless instance also gives you access to all of the useful utility functions that we've written. The Serverless instance is directly passed to your plugin. So you can use it right away like this like this... ``` _customAction(evt) { let Serverless = S; } ``` # Serverless Classes Below is a list of all of our classes. Using these classes and their methods together will give you complete control over your project, allowing you to extend the framework core functionality very easily. Because we're making rapid changes and moving fast, we're keeping all the docs for the API methods inline in classes files, that makes it easier to maintain and encourages users to be familiar with our codebase. So to learn more on each class, checkout its file: - **Project:** This class represents a single Serverless project. This is your entry point to all of the other classes. For a full API Reference on this class constructor and methods, [checkout the Project class file](https://github.com/serverless/serverless/blob/master/lib/Project.js). - **Function:** This class represents a single Serverless function. For a full API Reference on this class constructor and methods, [checkout the Function class file](https://github.com/serverless/serverless/blob/master/lib/Function.js). - **Endpoint:** This class represents a single Serverless endpoint. For a full API Reference on this class constructor and methods, [checkout the Endpoint class file](https://github.com/serverless/serverless/blob/master/lib/Endpoint.js). - **Event:** This class represents a single Serverless event. For a full API Reference on this class constructor and methods, [checkout the Event class file](https://github.com/serverless/serverless/blob/master/lib/Event.js). - **Stage:** This class represents a single Serverless stage. For a full API Reference on this class constructor and methods, [checkout the Stage class file](https://github.com/serverless/serverless/blob/master/lib/Stage.js). - **Region:** This class represents a single Serverless region. For a full API Reference on this class constructor and methods, [checkout the Region class file](https://github.com/serverless/serverless/blob/master/lib/Region.js). - **Resources:** This class represents your project CloudFormation resources. For a full API Reference on this class constructor and methods, [checkout the Resources class file](https://github.com/serverless/serverless/blob/master/lib/Resources.js). - **Variables:** This class represent your project variables. For a full API Reference on this class constructor and methods, [checkout the Variables class file](https://github.com/serverless/serverless/blob/master/lib/Variables.js). - **Templates:** This class represents your project templates. For a full API Reference on this class constructor and methods, [checkout the Templates class file](https://github.com/serverless/serverless/blob/master/lib/Templates.js). - **ProviderAws:** This class represents AWS as a provider. 
# Serverless Classes
Below is a list of all of our classes. Using these classes and their methods together gives you complete control over your project, allowing you to extend the framework's core functionality very easily. Because we're making rapid changes and moving fast, we keep all the docs for the API methods inline in the class files; that makes them easier to maintain and encourages users to get familiar with our codebase. To learn more about each class, check out its file:

- **Project:** This class represents a single Serverless project and is your entry point to all of the other classes. For a full API reference on this class's constructor and methods, [check out the Project class file](https://github.com/serverless/serverless/blob/master/lib/Project.js).
- **Function:** This class represents a single Serverless function. For a full API reference on this class's constructor and methods, [check out the Function class file](https://github.com/serverless/serverless/blob/master/lib/Function.js).
- **Endpoint:** This class represents a single Serverless endpoint. For a full API reference on this class's constructor and methods, [check out the Endpoint class file](https://github.com/serverless/serverless/blob/master/lib/Endpoint.js).
- **Event:** This class represents a single Serverless event. For a full API reference on this class's constructor and methods, [check out the Event class file](https://github.com/serverless/serverless/blob/master/lib/Event.js).
- **Stage:** This class represents a single Serverless stage. For a full API reference on this class's constructor and methods, [check out the Stage class file](https://github.com/serverless/serverless/blob/master/lib/Stage.js).
- **Region:** This class represents a single Serverless region. For a full API reference on this class's constructor and methods, [check out the Region class file](https://github.com/serverless/serverless/blob/master/lib/Region.js).
- **Resources:** This class represents your project's CloudFormation resources. For a full API reference on this class's constructor and methods, [check out the Resources class file](https://github.com/serverless/serverless/blob/master/lib/Resources.js).
- **Variables:** This class represents your project's variables. For a full API reference on this class's constructor and methods, [check out the Variables class file](https://github.com/serverless/serverless/blob/master/lib/Variables.js).
- **Templates:** This class represents your project's templates. For a full API reference on this class's constructor and methods, [check out the Templates class file](https://github.com/serverless/serverless/blob/master/lib/Templates.js).
- **ProviderAws:** This class represents AWS as a provider. For a full API reference on this class's constructor and methods, [check out the ProviderAws class file](https://github.com/serverless/serverless/blob/master/lib/ProviderAws.js).

This Serverless instance has a property called `classes`, which is just an object that contains each of our classes, so you can initialize each class like this:

```
let Project = new S.classes.Project(...);
let Function = new S.classes.Function(...);
// and so on
```

Of course, each class is constructed differently and requires different parameters. Check out each class's constructor for a better understanding of what's needed.

# Serverless Methods
Most of the time, you won't need to initialize the Serverless classes mentioned above. Instead, the Serverless instance provides some helpful methods that make it easy to start manipulating Serverless projects.

## getProject()

```
let Project = S.getProject();
```

Returns a **Project class instance** that contains all your project data. You can then use all the methods of the Project class for more power.

## getProvider()

```
let aws = S.getProvider();
```

Returns a **Provider class instance** that contains powerful methods to interact with AWS.

## updateConfig(config)

```
S.updateConfig({ projectPath: 'path/to/project' });
```

Updates the Serverless instance's configuration. Useful when you want to attach a project to the Serverless instance by providing a `projectPath`.

# Serverless Utilities
To give you even more power, the utility functions we use throughout the framework are included in the Serverless instance, giving you access to common functionality that is otherwise tedious to implement. You can access the utility functions through the `S.utils` property of the Serverless instance.

Below is a simple example: a utility function that checks whether a directory exists and returns a **Boolean**. To learn more about all of our utility functions, check out the `utils` file. It's pretty well documented :)

```
S.utils.dirExistsSync('path/to/dir');
```
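As a usage example, a plugin action might use it as a quick guard before doing any work. This is a minimal sketch; the path and the error message are placeholders:

```javascript
// Minimal sketch: bail out early if a directory the plugin expects is missing.
// Only S.utils.dirExistsSync() comes from the API above; the path and error
// message are placeholders.
if (!S.utils.dirExistsSync('path/to/dir')) {
  throw new Error('Expected directory "path/to/dir" was not found');
}
```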
{"__v":3,"_id":"56dac04a3dede50b00eacb7c","api":{"auth":"required","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":""},"body":"Now that you've learned about creating plugins and the Serverless API, let's take a closer look at what your plugin can accomplish and what you have access to by using the Serverless API. We'll demonstrate the most common tasks and how using a combination of Serverless classes and methods will make that very easy.\n\nIt's recommended that you take a look at the classes files mentioned earlier and read the inline docs to get a basic idea of what each method does, since we'll be using these methods in this section.\n\n## Project, Functions and Other Assets\nHere's a demo of how you can play around with your project assets (functions, endpoints, events, resources...etc)\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"// returns a Project instance\\nlet Project = S.getProject();\\n\\n// returns an array of Function instances. Which are all the functions that exist in your project\\nlet allFunctions = Project.getAllFunctions();\\n\\n// returns a Function instance\\nlet specificFunction = Project.getFunction('myFuncName');\\n\\n// returns `myFuncName`\\nspecificFunction.getName();\\n\\n// returns an array of Endpoint instances, which are all the endpoints of this function.\\nspecificFunction.getAllEndpoints();\\n\\n// returns an array of Event instances, which are all the events of this function.\\nspecificFunction.getAllEvents();\\n\\n// returns the name of the deployed lambda\\nlet options = {\\n    stage: \\\"dev\\\",\\n    region: \\\"us-east-1\\\"\\n}\\nspecificFunction.getDeployedName(options);\\n\\n// returns an array of all the Endpoint instances in your project\\nlet AllEndpoints = Project.getAllEndpoints();\\n\\n// returns an array of all the Event instances in your project\\nlet AllEvents = Project.getAllEvents();\\n\\n// returns a Resources instance\\nlet projectResources = Project.getResources();\\n\",\n      \"language\": \"javascript\",\n      \"name\": \"index.js\"\n    }\n  ]\n}\n[/block]\n## Stages, Regions, and Variables\nYou can play around with project stages, regions and variables like this...\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"// getting the project\\nlet Project = S.getProject();\\n\\n// returns an array of Stage instances\\nlet allStages = Project.getAllStages();\\n\\n// returns a Stage instance\\nlet specificStage = Project.getStage('dev');\\n\\n// returns an array of Region instances that exist in that specific stage\\nlet regionsInStage = specificStage.getAllRegions();\\n\\n// or as a shortcut you can get stage regions using the Project instance instead\\nlet regionsInStage = Project.getAllRegions('dev');\\n\\n// returns a Variables instance which includes all the variables of a specific stage\\nlet varsInStage = specificStage.getVariables();\\n\\n// returns a specific Region instance in a specific stage\\nlet region = specificStage.getRegion('us-east-1');\\n\\n// returns a Variables instance which include all the variables of a specific region\\nlet varsInRegion = region.getVariables();\\n\\n// add a new region to a stage\\nlet newRegion  = new S.classes.Region({ name: 'us-west-2' }, specificStage);\\nspecificStage.setRegion(newRegion);\\n\\n// add new variables to a region\\nlet region = Project.getRegion('dev', 'us-east-1');\\nlet newVars = {\\n    variableOne: \\\"someValue\\\",\\n     
variableTwo: \\\"someOtherValue\\\"\\n}\\nregion.addVariables(newVars);\",\n      \"language\": \"javascript\",\n      \"name\": \"index.js\"\n    }\n  ]\n}\n[/block]\n## Getting Populated Data\nSometimes you need project data to be populated with any referenced variables according to stage/region instead of returning the variable syntax to you (ie. `\"name\":\"${myVar}\"`). You can populate the referenced variables in your assets data like this...\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"// get the project as usual, then get a function and get its data populated with any referenced variables\\nlet Project = S.getProject();\\nlet myFunc = Project.getFunction('myFunc');\\nlet options = {\\n    stage: \\\"dev\\\",\\n    region: \\\"us-east-1\\\"\\n}\\nlet populatedFunc = myFunc.toObjectPopulated(options);\\n\\n// you can also do that with any other asset, not just Functions, for example, an Endpoint...\\nlet Endpoint = Project.getEndpoint('users/create~GET');\\n\\nlet options = {\\n    stage: \\\"dev\\\",\\n    region: \\\"us-east-1\\\"\\n}\\nlet populatedEndpoint = Endpoint.toObjectPopulated(options);\",\n      \"language\": \"javascript\",\n      \"name\": \"index.js\"\n    }\n  ]\n}\n[/block]\n## Updating and Saving Data\nTo update and save data in the file system, you first need to get an object literal from an instance, then manipulate that object however you like, then update the instance using that new updated object. Then, you can save that new instance...\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"let Project = S.getProject();\\n\\n// updating function data for demo. You can follow the same steps for other assets too (ie. Endpoints...etc)\\nlet myFunc = Project.getFunction('myFunc');\\n\\n// convert to object\\nmyFuncObj = myFunc.toObject();\\n\\n// make changes!\\nmyFuncObj.timeout = 10;\\n\\n// update the instance\\nmyFunc.fromObject(myFuncObj);\\n\\n// persist to file system\\nmyFunc.save()\",\n      \"language\": \"javascript\",\n      \"name\": \"index.js\"\n    }\n  ]\n}\n[/block]","category":"56dac0483dede50b00eacb54","createdAt":"2016-01-18T06:17:28.963Z","excerpt":"Some demos on how to use the Serverless API to accomplish common tasks.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","order":3,"parentDoc":null,"project":"5611c207f2aeda0d002b3734","slug":"project","sync_unique":"","title":"Putting It All Together","type":"basic","updates":[],"user":"5611c1e58c76a61900fd0739","version":"56dac0473dede50b00eacb50","childrenPages":[]}

Putting It All Together

Some demos on how to use the Serverless API to accomplish common tasks.

Now that you've learned about creating plugins and the Serverless API, let's take a closer look at what your plugin can accomplish and what you have access to through the Serverless API. We'll demonstrate the most common tasks and how a combination of Serverless classes and methods makes them very easy.

It's recommended that you take a look at the class files mentioned earlier and read the inline docs to get a basic idea of what each method does, since we'll be using those methods in this section.

## Project, Functions and Other Assets
Here's a demo of how you can play around with your project's assets (functions, endpoints, events, resources, etc.):

```javascript
// returns a Project instance
let Project = S.getProject();

// returns an array of Function instances — all the functions that exist in your project
let allFunctions = Project.getAllFunctions();

// returns a Function instance
let specificFunction = Project.getFunction('myFuncName');

// returns `myFuncName`
specificFunction.getName();

// returns an array of Endpoint instances — all the endpoints of this function
specificFunction.getAllEndpoints();

// returns an array of Event instances — all the events of this function
specificFunction.getAllEvents();

// returns the name of the deployed Lambda
let options = {
  stage: "dev",
  region: "us-east-1"
};
specificFunction.getDeployedName(options);

// returns an array of all the Endpoint instances in your project
let allEndpoints = Project.getAllEndpoints();

// returns an array of all the Event instances in your project
let allEvents = Project.getAllEvents();

// returns a Resources instance
let projectResources = Project.getResources();
```

## Stages, Regions, and Variables
You can play around with project stages, regions and variables like this...

```javascript
// getting the project
let Project = S.getProject();

// returns an array of Stage instances
let allStages = Project.getAllStages();

// returns a Stage instance
let specificStage = Project.getStage('dev');

// returns an array of Region instances that exist in that specific stage
let regionsInStage = specificStage.getAllRegions();

// ...or, as a shortcut, you can get a stage's regions from the Project instance instead
regionsInStage = Project.getAllRegions('dev');

// returns a Variables instance which includes all the variables of a specific stage
let varsInStage = specificStage.getVariables();

// returns a specific Region instance in a specific stage
let region = specificStage.getRegion('us-east-1');

// returns a Variables instance which includes all the variables of a specific region
let varsInRegion = region.getVariables();

// add a new region to a stage
let newRegion = new S.classes.Region({ name: 'us-west-2' }, specificStage);
specificStage.setRegion(newRegion);

// add new variables to a region
region = Project.getRegion('dev', 'us-east-1');
let newVars = {
  variableOne: "someValue",
  variableTwo: "someOtherValue"
};
region.addVariables(newVars);
```
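Building on the calls above, you could, for instance, walk every stage and its regions. This is a rough sketch: it assumes Stage and Region expose a `getName()` method analogous to `Function.getName()`, so verify the exact method names in the class files:

```javascript
// Rough sketch: print each stage and the regions deployed to it.
// Assumption: Stage and Region expose getName(), like Function does —
// check the Stage/Region class files to confirm.
let Project = S.getProject();

Project.getAllStages().forEach(stage => {
  let regionNames = stage.getAllRegions().map(region => region.getName());
  console.log(stage.getName() + ': ' + regionNames.join(', '));
});
```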
[block:code] { "codes": [ { "code": "// get the project as usual, then get a function and get its data populated with any referenced variables\nlet Project = S.getProject();\nlet myFunc = Project.getFunction('myFunc');\nlet options = {\n stage: \"dev\",\n region: \"us-east-1\"\n}\nlet populatedFunc = myFunc.toObjectPopulated(options);\n\n// you can also do that with any other asset, not just Functions, for example, an Endpoint...\nlet Endpoint = Project.getEndpoint('users/create~GET');\n\nlet options = {\n stage: \"dev\",\n region: \"us-east-1\"\n}\nlet populatedEndpoint = Endpoint.toObjectPopulated(options);", "language": "javascript", "name": "index.js" } ] } [/block] ## Updating and Saving Data To update and save data in the file system, you first need to get an object literal from an instance, then manipulate that object however you like, then update the instance using that new updated object. Then, you can save that new instance... [block:code] { "codes": [ { "code": "let Project = S.getProject();\n\n// updating function data for demo. You can follow the same steps for other assets too (ie. Endpoints...etc)\nlet myFunc = Project.getFunction('myFunc');\n\n// convert to object\nmyFuncObj = myFunc.toObject();\n\n// make changes!\nmyFuncObj.timeout = 10;\n\n// update the instance\nmyFunc.fromObject(myFuncObj);\n\n// persist to file system\nmyFunc.save()", "language": "javascript", "name": "index.js" } ] } [/block]