The `graph init` command can be used to set up a new subgraph project, either from an existing contract on any of the public Ethereum networks or from an example subgraph. This command can be used to create a subgraph on Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to Ethereum mainnet or one of the testnets, bootstrapping a new subgraph from that contract can be a good way to get started. But first, a little about the networks The Graph supports.
- `xdai` (now known as Gnosis Chain)
- `matic` (now known as Polygon)
- `bsc` (now known as BNB Chain)
`<SUBGRAPH_SLUG>` is the ID of your subgraph in Subgraph Studio; it can be found on your subgraph details page.
`graph init` also supports creating a new project from an example subgraph, via its `--from-example` flag.
The example contract emits `NewGravatar` and `UpdatedGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example.
`subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query. The full specification for subgraph manifests can be found here.
- `description`: a human-readable description of what the subgraph is. This description is displayed by Graph Explorer when the subgraph is deployed to the Hosted Service.
- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by Graph Explorer.
- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the `schema.graphql` file.
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping (`./src/mapping.ts` in the example) that transform these events into entities in the store.
- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and the handlers in the mapping that transform the inputs and outputs of function calls into entities in the store.
- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and the handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run on every block. An optional call filter can be provided by adding a filter with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
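Since the example manifest itself is not reproduced here, the following sketch illustrates how these fields fit together; the address, block number, and handler names are examples rather than canonical values:

```yaml
specVersion: 0.0.4
description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/example-subgraph
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC"
      abi: Gravity
      startBlock: 6175244
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      eventHandlers:
        - event: NewGravatar(uint256,address,string,string)
          handler: handleNewGravatar
      blockHandlers:
        - handler: handleBlock
        - handler: handleBlockWithCallToContract
          filter:
            kind: call
      file: ./src/mapping.ts
```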
The schema for your subgraph lives in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the GraphQL API section.
Entity types are defined in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of each entity type. Each type that should be an entity is required to be annotated with an `@entity` directive. By default, entities are mutable, meaning that mappings can load existing entities, modify them, and store a new version of that entity. Mutability comes at a price; for entity types that will never be modified, for example because they simply contain data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. Mappings can still make changes to immutable entities as long as those changes happen in the same block in which the entity was created. Immutable entities are much faster to write and to query, and should therefore be used whenever possible.
The `Gravatar` entity below is structured around a Gravatar object and is a good example of how an entity could be defined.
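A sketch of how such an entity could be declared; the field names mirror the Gravity example but are illustrative:

```graphql
type Gravatar @entity(immutable: true) {
  id: Bytes!
  owner: Bytes
  displayName: String
  imageUrl: String
  accepted: Boolean
}
```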
Event-based entities such as the `GravatarDeclined` entity below are modeled around events. It is not recommended to map events or function calls to entities 1:1.
Required fields are indicated with `!` in the schema. If a required field is not set in the mapping, you will receive an error along the lines of `Null value resolved for non-null field` when querying the field.
Every entity type is required to have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query than those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`.
For some entity types, the `id` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id)` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`.
You can set an enum field such as `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field:
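An illustrative enum definition and a `Token` entity using it; the enum values and other fields are examples:

```graphql
enum TokenStatus {
  OriginalOwner
  SecondOwner
  ThirdOwner
}

type Token @entity {
  id: Bytes!
  currentOwner: Bytes!
  tokenStatus: TokenStatus!
}
```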
An entity may have a relationship to one or more other entities in your schema. For example, you can define a `Transaction` entity type with an optional one-to-one relationship with a receipt entity type, or a `TokenBalance` entity type with a required one-to-many relationship with a `Token` entity type:
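An illustrative schema for both cases; the `TransactionReceipt` counterpart type in the one-to-one example is an assumption:

```graphql
# Optional one-to-one relationship
type Transaction @entity(immutable: true) {
  id: Bytes!
  transactionReceipt: TransactionReceipt
}

type TransactionReceipt @entity(immutable: true) {
  id: Bytes!
  transaction: Transaction
}

# Required one-to-many relationship
type Token @entity(immutable: true) {
  id: Bytes!
}

type TokenBalance @entity {
  id: Bytes!
  amount: Int!
  token: Token!
}
```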
Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
In the example below, a reverse lookup is added from a `User` entity type to an `Organization` entity type. This is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID.
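An illustrative schema for this reverse lookup; field names other than `members` are assumptions:

```graphql
type Organization @entity {
  id: Bytes!
  name: String!
  members: [User!]!
}

type User @entity {
  id: Bytes!
  name: String!
  organizations: [Organization!]! @derivedFrom(field: "members")
}
```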
A more performant way to store a many-to-many relationship is through a mapping table that has one entry for each `User` / `Organization` pair. This is illustrated in the example below:
Fulltext search queries are defined by adding a `_Schema_` type with a `@fulltext` directive in the GraphQL schema.
The `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the included fields, such as `bio`. Jump to GraphQL API - Queries for a description of the fulltext search API and more example usage.
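A sketch of such a fulltext definition; the `name` field alongside `bio` is an assumption:

```graphql
type _Schema_
  @fulltext(
    name: "bandSearch"
    language: en
    algorithm: rank
    include: [{ entity: "Band", fields: [{ name: "name" }, { name: "bio" }] }]
  )

type Band @entity {
  id: Bytes!
  name: String!
  bio: String
}
```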
For each event handler defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event`, with a type corresponding to the name of the event being handled.
In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events. The `handleNewGravatar` handler creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. The `handleUpdatedGravatar` handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`.
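A sketch of what these handlers can look like in AssemblyScript (compiled to WASM by `graph build`, not runnable as plain TypeScript); event parameter and entity field names are assumed to match the Gravity example:

```typescript
import { NewGravatar, UpdatedGravatar } from "../generated/Gravity/Gravity"
import { Gravatar } from "../generated/schema"

export function handleNewGravatar(event: NewGravatar): void {
  // Create the entity with an id derived from the event parameters
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}

export function handleUpdatedGravatar(event: UpdatedGravatar): void {
  let id = event.params.id.toHex()
  // Load the existing entity, or create it on demand
  let gravatar = Gravatar.load(id)
  if (gravatar == null) {
    gravatar = new Gravatar(id)
  }
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```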
Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below is a recommended `id` value to consider when creating new entities. NOTE: The value of `id` must be a `string`.

- `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`
Convenience scripts are added to `package.json`, allowing you to simply run `yarn codegen` (or `npm run codegen`) to achieve the same.
Code generation produces an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types, e.g. with `import { Gravity } from '../generated/Gravity/Gravity'`.
In addition, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields, as well as a `save()` method to write entities to the store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with `import { Gravatar } from '../generated/schema'`.
Note: The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
A handler is registered for the factory contract's `NewExchange(address,address)` event, which is emitted when a new exchange contract is created on-chain by the factory contract.
Data source templates are defined like regular data sources, except that they omit a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract.
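An illustrative template definition in the manifest; handler, entity, and ABI names are examples:

```yaml
templates:
  - name: Exchange
    kind: ethereum/contract
    network: mainnet
    source:
      abi: Exchange
      # no address here; instances are created from the mappings
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      file: ./src/exchange.ts
      entities:
        - Exchange
      abis:
        - name: Exchange
          file: ./abis/Exchange.json
      eventHandlers:
        - event: TokenPurchase(address,uint256,uint256)
          handler: handleTokenPurchase
```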
In the handler for the factory's event, you import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract.
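A sketch of such a handler in AssemblyScript, assuming generated `Factory` types with an `exchange` address parameter on the event:

```typescript
import { NewExchange } from "../generated/Factory/Factory"
import { Exchange } from "../generated/templates"

export function handleNewExchange(event: NewExchange): void {
  // Start indexing the exchange contract at the address emitted by the factory
  Exchange.create(event.params.exchange)
}
```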
Note: A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created.
Additional data, such as values that are only available in the `NewExchange` event, can be passed into the instantiated data source, like so:
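A sketch using `DataSourceContext` in AssemblyScript, assuming a hypothetical `tradingPair` event parameter:

```typescript
import { DataSourceContext } from "@graphprotocol/graph-ts"
import { NewExchange } from "../generated/Factory/Factory"
import { Exchange } from "../generated/templates"

export function handleNewExchange(event: NewExchange): void {
  let context = new DataSourceContext()
  // "tradingPair" is a hypothetical event parameter carried over to the template
  context.setString("tradingPair", event.params.tradingPair)
  Exchange.createWithContext(event.params.exchange, context)
}
```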
Inside a mapping of the `Exchange` template, the context can then be accessed:
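A sketch of reading that context from within the template's mapping; the `"tradingPair"` key is an assumption:

```typescript
import { dataSource } from "@graphprotocol/graph-ts"

export function handleSomeExchangeEvent(/* event: ... */): void {
  // Retrieve the context set when this data source instance was created
  let context = dataSource.context()
  let tradingPair = context.getString("tradingPair")
  // ... use tradingPair when building entities
}
```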
There are setters and getters like `setString` and `getString` for all value types.
`startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
Note: The contract creation block can be quickly looked up on Etherscan:

1. Search for the contract by entering its address in the search bar.
2. Click on the creation transaction hash in the Contract Creator section.
3. Load the transaction details page, where you'll find the start block for that contract.
Call handler mappings receive an `ethereum.Call` as an argument, with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
Note: Call handlers are not supported on Rinkeby, Goerli, or Ganache. Call handlers currently depend on the Parity tracing API and these networks do not support it.
To define a call handler, add a `callHandlers` array under the data source you would like to subscribe to.
The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract.
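An illustrative `callHandlers` entry for the Gravity example:

```yaml
callHandlers:
  - function: createGravatar(string,string)
    handler: handleCreateGravatar
```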
The handler will be invoked when the `createGravatar` function is called, and receives a `CreateGravatarCall` parameter as an argument:
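A sketch of such a call handler in AssemblyScript, assuming generated Gravity types with `_displayName` and `_imageUrl` inputs:

```typescript
import { CreateGravatarCall } from "../generated/Gravity/Gravity"
import { Gravatar } from "../generated/schema"

export function handleCreateGravatar(call: CreateGravatarCall): void {
  // Use the transaction hash as the entity id
  let id = call.transaction.hash.toHex()
  let gravatar = new Gravatar(id)
  // Typed inputs of the call, named after the Solidity parameters
  gravatar.displayName = call.inputs._displayName
  gravatar.imageUrl = call.inputs._imageUrl
  gravatar.save()
}
```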
The `handleCreateGravatar` function takes a new `CreateGravatarCall`, which is a subclass of `ethereum.Call` provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
Block handler mapping functions receive an `ethereum.Block` as their only argument. Like mapping functions for events, they can access existing subgraph entities in the store, call smart contracts, and create or update entities.
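A sketch of a block handler in AssemblyScript, assuming a hypothetical `Block` entity in the schema:

```typescript
import { ethereum } from "@graphprotocol/graph-ts"
import { Block } from "../generated/schema" // hypothetical entity type

export function handleBlock(block: ethereum.Block): void {
  // Record one entity per block, keyed by the block hash
  let entity = new Block(block.hash.toHex())
  entity.number = block.number
  entity.save()
}
```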
An event will only be handled if its `topic0` is equal to the hash of the event signature.
Starting from `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. When enabled, the receipt is available to the handler through the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead.
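An illustrative event handler declaration with receipts enabled:

```yaml
eventHandlers:
  - event: NewGravatar(uint256,address,string,string)
    handler: handleNewGravatar
    receipt: true
```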
Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name (for example, `fullTextSearch`, `nonFatalErrors`, or `grafting`):
If a subgraph uses such features, the `features` field in the manifest should be:
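For example, a manifest declaring fulltext search and non-fatal errors might look like:

```yaml
specVersion: 0.0.4
description: Gravatar for Ethereum
features:
  - fullTextSearch
  - nonFatalErrors
dataSources: ...
```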
Files on IPFS can be read during indexing with `ipfs.map`. To do this reliably, it is required that these files are pinned to an IPFS node with high availability, so that the Hosted Service IPFS node can find them during indexing.
Note: The Graph Network does not yet support `ipfs.map`, and developers should not deploy subgraphs using that functionality to the network via the Studio. The `GRAPH_ALLOW_NON_DETERMINISTIC_IPFS` environment variable must be set on the Graph Node in order to index subgraphs using this experimental functionality.
Note: The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
Querying an errored subgraph requires passing the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
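An illustrative query; the `foos` field is a placeholder for one of your entity collections:

```graphql
foos(first: 100, subgraphError: allow) {
  id
}

_meta {
  hasIndexingErrors
}
```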
"indexing_error", as in this example response:
When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called grafting. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing subgraph working again after it has failed.
Note: Grafting requires that the Indexer has indexed the base subgraph. It is not recommended on The Graph Network at this time, and developers should not deploy subgraphs using that functionality to the network via the Studio.
A subgraph is grafted onto a base subgraph when the subgraph manifest contains a `graft` block at the top level:
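An illustrative `graft` block; the base deployment ID and block number are placeholders:

```yaml
description: ...
graft:
  base: Qm... # deployment ID of the base subgraph
  block: 7345624 # block number up to which data is copied
```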
When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block`, and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.