Welcome to ElfinGuard’s documentation!¶
ElfinGuard protocols aim to decentralize the web. The first released protocol, the Access-Control Protocol, is designed for authors and their target audience. Authors use smart contracts to program who can access their content and how, while CDN providers build neutral channels through which multimedia content is delivered smoothly from authors to their audience. The protocol links authors and audience directly and frictionlessly, without intermediaries or censorship.
No access control, no ownership. In the Web 3.0 era, ElfinGuard’s Access-Control Protocol is a key piece of infrastructure for bringing data ownership back to creators and users.
Note
The ElfinGuard protocols are still under active development; the documentation here reflects the current design but may change.
Contents¶
ElfinGuard Access-Control Protocol¶
Today users can store information with the help of decentralized file hosting. However, decentralizing storage alone is not enough: to fully protect their data, users also need an access-control protocol.
We aim to add this missing link by combining the power of decentralized storage with advanced access control. ElfinGuard is revolutionizing the way users protect their data.
The Value of Access-Control¶
Web 2.0 relies heavily on file hosting in tandem with content delivery networks (CDNs) to deliver multimedia content to users. Authors spend time and effort to create content, and it costs them storage and bandwidth resources to deliver it to their target users. Authors and delivery services earn revenue from end users, either directly (fees) or indirectly (advertisements). The Web 2.0 era has produced multiple centralized methods to directly control access to this content; businesses cannot profit if users can extract any content without permission.
Web 3.0 has seemingly solved the problem of centralized file storage; solutions such as IPFS, Filecoin, Swarm and Arweave offer decentralized file storage. However, this technology only incentivizes storage providers. There is no earning opportunity for authors and deliverers on decentralized file storage, because it provides no access-control mechanisms.
Blockchains build an internet of value, and smart contracts make that value programmable. Can the traditional internet of information be built with blockchains as well? Can smart contracts govern access to information? ElfinGuard answers both questions.
ElfinGuard’s access-control protocol allows authors to program who can access their content on decentralized storage, how, and when. This technology will pave the way for new blockchain-oriented business models.
ElfinGuard’s access-control protocol has the following innovative features:
- Hardware-based protection, making it more reliable than laws, morality, discipline, or human-controlled permissions.
- Easily programmable with Solidity; authorization is based on smart contracts and on-chain state.
- No single-point of failure; content delivery and authorization are decentralized.
This article introduces the protocol step-by-step. Each step will refine the granularity of the solution, ultimately providing a comprehensive understanding of this innovative technology.
Enforced Permissions on Public Data¶
While there isn’t a common definition of Web 3.0, most people believe it will empower users to control their own data, enjoy greater privacy and prevent centralized platforms from monopolizing the acquisition, recommendation, and censorship of their information. Thus, the decentralization of data storage and data delivery are key cornerstones of Web 3.0.
Today, decentralized storage is easily accessible. Solutions such as IPFS have made significant strides in enabling users to store and deliver content; for example, anyone can pin a video on IPFS using any of several pinning utilities. However, delivering content efficiently and smoothly to every viewer remains challenging. This is where centralized platforms provide the most value: they ensure smooth delivery to a mass audience.
As content becomes more sophisticated, it requires larger files: high-resolution pictures, articles with tables and figures, and long audio and video. Such files are more expensive to transfer than to store, because bandwidth costs more than storage. Content delivery networks (CDNs) reduce the bandwidth cost by caching data in edge nodes close to the target audience.
In the Web 3.0 era, users still need CDNs. However, CDN providers only serve the websites that pay them, using the “same origin policy” to decide whether to serve a request: if website B embeds an image or video hosted by website A, the CDN provider will not serve A’s content to B’s visitors.
Decentralized storage schemes, on the other hand, do not explicitly reveal a file’s origin or constrain access to files based on the “same origin policy.” Thus, CDN providers can only be encouraged to serve permissionless data to the public (that is, to ignore which website or app is requesting the data), which severely limits the use cases for authors.
This is where ElfinGuard comes in. ElfinGuard allows authors to create enforceable permissions on public content in decentralized storages and ensures only their target audience can access that content.
Encryption enclaves play a critical role in helping authors secure their content and reach their target audience. Encryption enclaves connect to one another and share a single symmetric key for encryption and decryption. The key is generated inside one enclave, broadcast to the others through secure channels, and protected by the enclave hardware. An original file, together with the addresses of its target audience, is encrypted into a single file by an encryption enclave. The encrypted file is then stored on IPFS.
When a user requests the original file, a CDN provider fetches the encrypted data and passes it to an encryption enclave. The enclave checks whether the requestor’s address is listed in the file’s target audience; if so, it decrypts the file, and the CDN provider sends the original file to the requestor.
End-to-end encryption¶
On Web 2.0 platforms users upload original files without any encryption. As a result, these files can be leaked to the public or to malicious attackers, even when users ask the platform to keep them private. Most platforms are required by law and company policy to publish a privacy policy, but the uploaded files are stored in databases that rely on human-controlled permissions to limit access. Software vulnerabilities, human error, or insider misconduct can still cause data leaks, despite a platform’s best efforts to honor its privacy policy.
Moreover, operators on a CDN provider’s backend can view any content, regardless of the access constraint of the actual website it serves. In our above example, the CDN provider can keep a copy of the decrypted file against the author’s will.
The solution to these issues is upgrading encryption enclaves to “re-cryptors”. A re-cryptor takes an encrypted file as an input, decrypts it into the original content, and finally re-encrypts it with another distinct key.
When an author wants a re-cryptor to encrypt a file before uploading it to IPFS, he or she must first encrypt the original file with a shared secret known only to the author and the re-cryptor. When a user requests a re-cryptor to decrypt a file from IPFS, he or she gets back a file encrypted with such a shared secret, not the plaintext original.
During the whole process of storing and delivering, only the logic inside the re-cryptor enclave can access the original content, and the hardware enclave ensures it is never leaked. Therefore, other than the author and the target audience, no one at the platform or CDN provider can view the original content.
Authorization contracts¶
Traditionally, CDN providers rely on authorization servers to provide sophisticated dynamic access control. Authorization servers are typically run by a CDN provider’s customer. For example, if Bob requests a URL of website A, the CDN provider asks website A’s authorization server whether Bob is authorized to view the URL. Website A may allow Bob to view it if he is a VVIP, make Bob wait until Friday if he is a VIP, or deny Bob if he is neither. In the Web 3.0 era, however, authors need a more streamlined way to do dynamic access control, because running an authorization server is too complicated and time-consuming for ordinary authors.
One solution to this problem is to use on-chain smart contracts for dynamic access control. For instance, an author may restrict access to their content to holders of specific NFTs or ERC20 tokens. A re-cryptor then uses eth_call to invoke a smart contract function with the requestor’s address as the argument; if the function returns true, the requestor is granted access. The file uploaded to IPFS specifies which contracts to call and how to call them, rather than a static list of target audience members. This approach gives authors an efficient and effective way to manage dynamic access control.
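As a concrete illustration, the sketch below shows in Python how such an eth_call request might be assembled. The selector and addresses are hypothetical placeholders; the actual function signature is defined by the author’s authorization contract.

```python
import json

def build_eth_call(requestor_addr: str, contract_addr: str, selector: bytes) -> dict:
    """Build a JSON-RPC eth_call payload asking an authorization contract
    whether `requestor_addr` may access the content. `selector` is the
    4-byte function selector (normally the first 4 bytes of the keccak256
    of the function signature; hardcoded here as a placeholder)."""
    # ABI-encode the single address argument: left-pad 20 bytes to 32.
    arg = bytes.fromhex(requestor_addr.removeprefix("0x")).rjust(32, b"\x00")
    calldata = "0x" + (selector + arg).hex()
    return {
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": contract_addr, "data": calldata}, "latest"],
        "id": 1,
    }

payload = build_eth_call(
    "0x1111111111111111111111111111111111111111",  # requestor (hypothetical)
    "0x2222222222222222222222222222222222222222",  # authorization contract
    bytes.fromhex("deadbeef"),                     # hypothetical selector
)
print(json.dumps(payload))
```

The re-cryptor would POST this payload to an RPC node and grant access only if the returned value decodes to true.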
Multi-Grant from Authorities¶
When a re-cryptor relies on eth_call for authorization, there is a risk that it is fed false chain data. To query eth_call, users need a blockchain node that provides RPC endpoints. A node can be run by the user, but in most cases the user rents one from a Node-as-a-Service (NaaS) provider.
Although a re-cryptor’s internal data and logic are safe under the protection of enclaves, the input it receives through eth_call may be incorrect for various reasons. A re-cryptor may receive wrong information from a NaaS provider because of misconfigured DNS or TLS settings, and a node run by a CDN provider may return wrong information if it is hacked through a vulnerability. Any CDN provider may have security problems; thus, trusting a single CDN provider is risky for content authors.
To address this issue, authorization is separated out from the re-cryptors: dedicated Authorizers query eth_call instead. These authorizers are run by several trustworthy authorities with strong security measures and good reputations.
To further protect the symmetric key, we use a “multi-grant” scheme similar to multi-signature. The content author specifies N authorities and a threshold M (M < N). Before uploading, the re-cryptor must encrypt the original file with all N grant codes. Correspondingly, before the re-cryptor decrypts a file for a requestor, the requestor must collect at least M grant codes from the specified authorities.
All authorizers run by the same authority share the same “grant root”. For each individual file, an authorizer derives a unique grant code from the grant root after confirming the requestor is allowed to access the file. The grant root is generated inside enclaves and shared only among enclaves, so even an operator employed by the authority cannot view it. Grant codes are sent from authorizers to re-cryptors through secure channels that prevent any third party from viewing them. To ensure grant codes are sent only to trustworthy enclaves, authorizers always verify the re-cryptors before opening secure channels.
(The encryption/decryption algorithm for “multi-grant” will be introduced in a separate article.)
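For intuition only, M-of-N recovery of a secret is commonly built on Shamir secret sharing. The toy Python sketch below is not ElfinGuard’s actual multi-grant algorithm (which is deferred to that separate article); it only illustrates the threshold property: any M shares recover the key, fewer reveal nothing.

```python
import random

P = 2**127 - 1  # a Mersenne prime; toy field size

def split(secret: int, n: int, m: int):
    """Split `secret` into n shares so that any m of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 from m shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)          # stands in for the file's symmetric key
shares = split(key, n=5, m=3)      # N=5 authorities, threshold M=3
assert recover(shares[:3]) == key  # any 3 grant codes suffice
assert recover(shares[2:]) == key
```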
Multi-zone to mitigate risks of enclaves’ vulnerabilities¶
Enclaves are integral to a system’s security, but they may still be compromised if the underlying hardware has vulnerabilities. Although no real-world attacks have been reported on CPUs with hyperthreading disabled, the risk of security breaches still exists.
Currently, enclaves can be implemented using Intel’s SGX & TDX, AMD’s SEV-SNP, ARM’s TEE, and AWS’s Nitro. SGX is the most mature and mainstream solution while the others are rapidly evolving. Enclaves are divided into different zones, and each zone uses the same technology. For example, all enclaves based on Intel SGX are in the same zone.
The probability of all zones being simultaneously exploited by hackers is extremely low. However, an author can further protect his or her file by splitting it into multiple parts, each of which is protected by a different enclave zone.
For example, an author divides a file into three parts: Part #1 is protected by SGX enclaves, Part #2 by SEV-SNP enclaves, and Part #3 by AWS’s Nitro enclaves. This approach requires the audience to retrieve all three parts to recover the full original file.
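The protocol does not prescribe a specific splitting format. One possible approach, sketched below in Python as an assumption rather than ElfinGuard’s actual method, is an XOR-based all-or-nothing split: missing any single part reveals nothing about the file.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_zones(data: bytes, n: int = 3):
    """All-or-nothing split into n parts: n-1 random pads plus an XOR
    residue. Each part would be protected by a different enclave zone
    (e.g. SGX, SEV-SNP, Nitro); all n parts are needed to recover."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    residue = reduce(xor, pads, data)
    return pads + [residue]

def recover(parts):
    """XOR all parts back together to reconstruct the original data."""
    return reduce(xor, parts)

parts = split_into_zones(b"high-resolution video payload", 3)
assert recover(parts) == b"high-resolution video payload"
```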
The Big Picture¶
ElfinGuard Access-Control Protocol uses smart contracts to manage file accessibility. The overall flow is as follows:

The author uses smart contracts to program file accessibility and uploads these files to re-cryptors run by CDN providers. The re-cryptors encrypt files and store them in decentralized storage services.
When the audience wants to view a file, they must connect to a re-cryptor, which retrieves the file from decentralized storage. The re-cryptor asks an authorizer to grant decryption of the file. Once the re-cryptor decrypts the file, the audience can download it.
The content author writes smart contracts to specify what audience behaviors or states on blockchains will be qualified to view the file. In most cases the ‘behavior’ is a payment (audience pays the author directly with ERC20 tokens) and the ‘state’ is ownership of certain NFTs.
To close the loop, an authorizer uses the blockchain’s RPC nodes to decide whether to grant decryption based on the specified behaviors or states. Once these checks are completed and verified, the author’s intended audience can view the content.
Elfin Authorizer¶
Blockchain-based Authorizing¶
In the Web 2.0 era, account-based access control is very common: you register with a website or app, and it grants you a “level” such as Guest, VIP, or SVIP. Different levels get different service quality. For example, some high-quality content is available only to VIPs, and an SVIP account can view a new TV show much earlier than a guest account.
As we enter the Web 3.0 era, authorization based on blockchain accounts is more and more popular. Many websites and apps support “login with Web3 wallets”. But login alone is not enough: we also want to use on-chain state to assign levels to accounts. When a website decides whether to provide a service to a specific account, it may want to consider:
- Historical contract interaction. Did the account ever send transactions to call certain contracts?
- Historical events. Did it ever receive a certain ERC20 token or NFT?
- The latest state of the account. Does it currently own certain ERC20 tokens or NFTs? Does it belong to a DAO?
These conditions can be AND-ed/OR-ed together to form more sophisticated conditions.
Node-as-a-Service (NaaS) providers run plenty of nodes on different blockchains. Accessing on-chain state is a routine task for them, so it would be very easy for them to add a new service: Elfin authorizers.
Elfin authorizers can assign access permissions to accounts according to such sophisticated conditions. A website or App can outsource the permissioning task to them.
Elfin authorizers are oracles that endorse facts by signing them with their private keys. This article introduces the different facts they can sign and how to express sophisticated conditions using these signed facts.
Primitives provided by authorizers¶
Endorse historical contract interaction¶
A historical transaction can be described by the following Solidity struct:
struct TxInfo {
    uint chainId;        // which chain did this transaction happen on?
    uint timestamp;      // when was this transaction confirmed?
    uint txid;           // the transaction's hash
    address fromAccount;
    address toAccount;
    uint value;
    bytes callData;
}
If a transaction really happened in a blockchain’s history, the Elfin authorizer generates a TxInfo struct to describe it, serializes the struct into raw bytes with abi.encodePacked, and signs the keccak256 hash of those bytes. In this way it endorses the transaction with its private key.
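The serialization step can be modeled in Python. This is a sketch of abi.encodePacked’s rules for the TxInfo fields; the keccak256 hashing and signing are omitted because keccak256 is not in the Python standard library.

```python
def encode_packed_tx_info(chain_id: int, timestamp: int, txid: int,
                          from_account: bytes, to_account: bytes,
                          value: int, call_data: bytes) -> bytes:
    """Mimic Solidity's abi.encodePacked for TxInfo: each uint becomes
    32 big-endian bytes, each address 20 bytes, and the dynamic bytes
    field is appended raw (no length prefix)."""
    assert len(from_account) == 20 and len(to_account) == 20
    return (chain_id.to_bytes(32, "big")
            + timestamp.to_bytes(32, "big")
            + txid.to_bytes(32, "big")
            + from_account
            + to_account
            + value.to_bytes(32, "big")
            + call_data)

packed = encode_packed_tx_info(1, 1_700_000_000, 0xabc,
                               b"\x11" * 20, b"\x22" * 20, 0, b"\xde\xad")
assert len(packed) == 32 * 4 + 20 * 2 + 2  # three uints + value, two addresses, calldata
# The authorizer would then sign keccak256(packed) with its private key.
```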
Endorse historical events¶
Events are implemented using EVM logs. An EVM log can be described by the following Solidity struct:
struct LogInfo {
    uint chainId;              // which chain did this log happen on?
    uint timestamp;            // when did this log happen?
    address sourceContract;    // which contract generated this log?
    bytes32[] topics;          // a log has 0~4 topics
    bytes data;                // a log has variable-length data
}
If an event was really emitted in a blockchain’s history, the Elfin authorizer generates a LogInfo struct to describe it, serializes the struct into raw bytes with abi.encodePacked, and signs the keccak256 hash of those bytes. In this way it endorses the event with its private key.
Endorse the outputs of eth_call¶
A requestor asks the Elfin authorizer to query the eth_call endpoint of Web3 RPC servers. Bytes 16~35 of the calldata used for eth_call must equal the authorizer’s EVM address, which is calculated from its private key. The authorizer collects the related information about this eth_call to fill the following struct:
struct EthCallInfo {
    uint chainId;
    uint timestamp;
    address fromAccount;
    address targetContract;
    bytes4 functionSelector;
    bytes outData;
}
Then it serializes the struct into raw bytes with abi.encodePacked and signs the keccak256 hash of those bytes. In this way it endorses the output of eth_call with its private key.
Granting secrets to account owners¶
A requestor asks the Elfin authorizer to query the eth_call endpoint of Web3 RPC servers. Bytes 16~35 of the calldata used for eth_call must equal the authorizer’s EVM address, which is calculated from its private key. The from-account for eth_call must be the requestor’s EVM address (a personal_sign signature is required to prove this). The authorizer collects the related information about this eth_call to fill the following struct:
struct SecretSeed {
    uint chainId;
    bytes4 functionSelector;
    address targetContract;
    bytes outData;
}
Then it serializes the struct into raw bytes with abi.encodePacked and calculates the keccak256 hash of those bytes. With its private key and this hash, it generates a VRF (verifiable random function) output and a proof. The VRF output is a secret that only a qualified requestor can get.
For granting secrets, authorizers also support a recryptor mode, which requires that the request come from a recryptor’s enclave. In recryptor mode, the raw bytes’ sha256 hash is used for the VRF instead of the keccak256 hash.
Writing authorization contracts to express sophisticated conditions¶
Suppose we want to provide a file-sharing service only to qualified accounts, namely:
- Someone who is explicitly marked as a qualified member by a superuser
- Someone who has called a given contract and received a given ERC20 token in the past two months
The isQualified function of the following Membership contract can check if msg.sender is a qualified account:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/access/Ownable.sol";

// The TxInfo and LogInfo structs are defined in the previous section.

struct Signature {
    uint8 v;
    bytes32 r;
    bytes32 s;
}

contract Membership is Ownable {
    mapping(address => bool) public isMember;
    mapping(uint => bool) public forbiddenFiles;
    address immutable public erc20Token;
    address immutable public calledContract;
    // topic0 of the standard ERC20 Transfer event
    bytes32 constant private TransferEvent = keccak256("Transfer(address,address,uint256)");
    string constant private PREFIX = "\x19Ethereum Signed Message:\n32";

    constructor(address _erc20Token, address _calledContract) Ownable() {
        erc20Token = _erc20Token;
        calledContract = _calledContract;
    }

    function setMembership(address addr, bool ok) public onlyOwner {
        isMember[addr] = ok;
    }

    function getHash(TxInfo calldata t) internal pure returns (bytes32) {
        bytes32 h = keccak256(abi.encodePacked(t.chainId, t.timestamp, t.txid,
            t.fromAccount, t.toAccount, t.value, t.callData));
        return keccak256(abi.encodePacked(PREFIX, h));
    }

    function getHash(LogInfo calldata l) internal pure returns (bytes32) {
        bytes32 h = keccak256(abi.encodePacked(l.chainId, l.timestamp,
            l.sourceContract, l.topics, l.data));
        return keccak256(abi.encodePacked(PREFIX, h));
    }

    function isQualified(address authorizer, TxInfo calldata txInfo, Signature calldata txSig,
                         LogInfo calldata logInfo, Signature calldata logSig) public view returns (bool) {
        if (isMember[msg.sender]) return true;
        require(authorizer == ecrecover(getHash(txInfo), txSig.v, txSig.r, txSig.s), "invalid-txSig");
        require(authorizer == ecrecover(getHash(logInfo), logSig.v, logSig.r, logSig.s), "invalid-logSig");
        uint twoMonthAgo = block.timestamp - 60 days;
        return txInfo.toAccount == calledContract && txInfo.fromAccount == msg.sender &&
            logInfo.sourceContract == erc20Token &&
            logInfo.topics[0] == TransferEvent &&
            // an indexed address is ABI-encoded left-padded into a bytes32 topic
            logInfo.topics[2] == bytes32(uint256(uint160(msg.sender))) &&
            twoMonthAgo < txInfo.timestamp && twoMonthAgo < logInfo.timestamp;
    }
}
Before calling isQualified, a requestor must query the authorizer to get the TxInfo and LogInfo structs (and their signatures), which are then used as arguments to isQualified. The first argument must be the authorizer’s address, which ensures the TxInfo and LogInfo were really generated by that authorizer.
When the authorizer endorses the EthCallInfo struct after calling isQualified, the requestor holds a proof that he or she is a qualified account.
Now, we want to upgrade this file-sharing service to support encryption and decryption. The files are encrypted with symmetric keys known only to the qualified accounts. Any qualified account can use the symmetric key of the current time to encrypt and upload files, but different accounts have different decryption permissions:
- Someone who is explicitly marked as a qualified member by a superuser can decrypt all files.
- Someone who has called a given contract and received a given ERC20 token in the past two months can only decrypt files encrypted in the past five days.
We add a new function getSecret to the Membership contract:
    function setForbidden(uint fileid, bool forbidden) public onlyOwner {
        forbiddenFiles[fileid] = forbidden;
    }

    function getSecret(address authorizer, uint fileid, TxInfo calldata txInfo, Signature calldata txSig,
                       LogInfo calldata logInfo, Signature calldata logSig, uint shareTime) public view returns (uint, uint) {
        if (forbiddenFiles[fileid]) return (0, 0);
        if (isMember[msg.sender]) return (shareTime, fileid);
        require(authorizer == ecrecover(getHash(txInfo), txSig.v, txSig.r, txSig.s), "invalid-txSig");
        require(authorizer == ecrecover(getHash(logInfo), logSig.v, logSig.r, logSig.s), "invalid-logSig");
        uint twoMonthAgo = block.timestamp - 60 days;
        bool qualified = txInfo.toAccount == calledContract && txInfo.fromAccount == msg.sender &&
            logInfo.topics[0] == TransferEvent &&
            logInfo.topics[2] == bytes32(uint256(uint160(msg.sender))) &&
            twoMonthAgo < txInfo.timestamp && twoMonthAgo < logInfo.timestamp;
        if (qualified && block.timestamp - 5 days < shareTime && shareTime < block.timestamp + 1 hours) {
            return (shareTime, fileid);
        }
        return (0, 0);
    }
The argument shareTime is the time when the file was encrypted and shared. The fileid is a unique id assigned to each shared file. The superuser can disable the sharing of individual files by calling setForbidden with the fileid. If several files logically belong to a single file, such as the segments of an m3u8 playlist or the parts of a multi-part archive, it is suggested that they share the same fileid.
A requestor asks the authorizer to call the getSecret function for secret-granting. The authorizer fills a SecretSeed struct and uses it to generate a VRF output, which serves as the symmetric key for encryption and decryption.
The RPC Endpoints¶
An authorizer provides four RPC endpoints to support the primitives above. All endpoints return a JSON object with the following fields:
- IsSuccess: whether the RPC finished successfully
- Message: an empty string when IsSuccess is true; otherwise a string explaining the failure
- Result: for granting secrets, the from-account’s address and the VRF output (in recryptor mode this output is encrypted); for the other endpoints, the raw bytes to be signed
- Proof: for granting secrets, the VRF proof; for the other endpoints, the signature
- Salt: only used in recryptor mode for granting secrets. Its first eight bytes are the current timestamp (little endian) and the remaining bytes are a random number generated by a hardware RNG
- PubKey: only used in recryptor mode for granting secrets. It is the authorizer’s public key
In recryptor mode, the recryptor derives a shared secret from its private key and the authorizer’s PubKey, then uses this secret and the returned Salt to decrypt the returned Result into the VRF output.
Endorse historical contract interaction¶
The RPC endpoint’s URL is like below:
/eg_tx?hash=<transaction-hash-id>
The hash parameter is in hex format and starts with “0x”.
Endorse historical events¶
The RPC endpoint’s URL is like below:
/eg_log?contract=<contract-address>&block=<blockhash>&topic0=<hex-string>&topic1=<hex-string>&topic2=<hex-string>&topic3=<hex-string>
The parameters topic0 ~ topic3 filter out one single EVM log generated by the contract in the specified block. Some or all of them can be omitted, as long as exactly one EVM log remains after filtering.
All these parameters are in hex format and start with “0x”.
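Assembling such a request URL is straightforward; a small Python sketch, using a hypothetical authorizer host and placeholder hex values:

```python
from urllib.parse import urlencode

def eg_log_url(base: str, contract: str, block_hash: str, **topics) -> str:
    """Build an /eg_log request URL. Unused topic filters are simply
    omitted; the remaining filters must match exactly one log in the
    specified block."""
    params = {"contract": contract, "block": block_hash}
    for name in ("topic0", "topic1", "topic2", "topic3"):
        if name in topics:
            params[name] = topics[name]
    return f"{base}/eg_log?{urlencode(params)}"

url = eg_log_url("https://authorizer.example",   # hypothetical host
                 "0x" + "22" * 20,               # contract address
                 "0x" + "ab" * 32,               # block hash
                 topic0="0x" + "cd" * 32)        # e.g. an event's topic0
print(url)
```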
Endorse the outputs of eth_call¶
The RPC endpoint’s URL is like below:
/eg_call?contract=<contract-address>&data=<calldata>&from=<from-account-address>
All three parameters are in hex format and start with “0x”.
Granting secrets to account owners¶
The RPC endpoint’s URL is like below:
/eg_grantcode?time=<unix-timestamp>&contract=<contract-address>&datalist=<calldatalist>&nth=<index-of-calldata>&sig=<from-account-signature>&recryptorpk=<pubkey-of-recyrptor>&out=<outdata>
The time and nth parameters are decimal integers. The other parameters are in hex format and start with “0x”.
The recryptorpk and out parameters are used only in recryptor mode, where the requestor is the recryptor enclave. recryptorpk specifies the public key of the recryptor, and its presence indicates recryptor mode. In recryptor mode, the body of the HTTP request must be the attestation report of the recryptor enclave; authorizers check this report to ensure the request was sent from an SGX enclave.
calldatalist is a list of calldata entries for different authorizers to query eth_call; nth specifies which entry in calldatalist is the calldata for this authorizer. Each entry of calldatalist is a hex string, and commas separate the entries.
The 20 bytes of calldata[16:36] are overwritten with the authorizer’s EVM address before the authorizer uses the calldata to query eth_call. Thus, the called function can read this EVM address as its first argument.
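This overwriting step can be sketched in Python (the selector below is a placeholder):

```python
def inject_authorizer_address(calldata: bytes, authorizer: bytes) -> bytes:
    """Overwrite bytes [16:36) of the calldata with the authorizer's
    20-byte EVM address, as the authorizer does before issuing eth_call.
    Bytes 4..36 hold the first ABI-encoded argument, so the called
    contract function sees the authorizer's address as its first argument."""
    assert len(authorizer) == 20 and len(calldata) >= 36
    return calldata[:16] + authorizer + calldata[36:]

calldata = bytes.fromhex("deadbeef") + bytes(64)   # placeholder selector + 2 args
patched = inject_authorizer_address(calldata, b"\xaa" * 20)
assert patched[16:36] == b"\xaa" * 20
```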
The from-account’s address is recovered from the sig parameter. When sig is omitted, the from-account is the zero address. The sig is generated with MetaMask’s personal_sign. The signed text is:
To Authorizer: time=<unix-timestamp>, contract=<contract-evm-address>, data=<keccak256-of-datalist-with-0x-prefix>
In recryptor mode, if the recryptor wants to push a file to the cloud, it uses the out parameter to specify the output of eth_call. The authorizer will then not query eth_call; instead, it uses the out parameter as the output of eth_call.
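A small Python helper can reproduce the exact signed text. The keccak256 of the datalist is assumed to be computed elsewhere (it is not in the Python standard library), and the values below are placeholders:

```python
def grantcode_signed_text(unix_time: int, contract: str,
                          datalist_keccak: str) -> str:
    """Build the text that MetaMask's personal_sign signs for
    /eg_grantcode. `contract` is the 0x-prefixed contract address and
    `datalist_keccak` is the 0x-prefixed keccak256 of the datalist."""
    return (f"To Authorizer: time={unix_time}, contract={contract}, "
            f"data={datalist_keccak}")

text = grantcode_signed_text(1_700_000_000,
                             "0x" + "22" * 20,   # placeholder contract
                             "0x" + "cd" * 32)   # placeholder keccak256
print(text)
```

personal_sign then prefixes this text with the standard EIP-191 header ("\x19Ethereum Signed Message:\n" plus the message length) before signing.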
Load Balance and Authentication¶
The enclave implementation of Elfin authorizer is designed to run on a single machine. The service provider can run many Elfin authorizer enclaves and use a reverse proxy to distribute the requests to them.
A provider may wish to serve only a limited set of customers, such as the recryptors of a CDN vendor that has paid. However, the RPC endpoints provided by the enclave do not support authentication.
If the provider would like to use some authentication method (basic auth, API keys, etc.), it can deploy it at the reverse proxy. The basic auth header or API key parameter must be removed before the request is forwarded to the enclaves.
Rate Limit¶
The Elfin authorizer does not support rate limiting. The service provider can implement rate limiting in the reverse proxy.
Elfin Recryptor¶
A new function in content delivery¶
CDN providers run recryptors to serve the authors and audience.
Authors and audience connect to recryptor enclaves directly through HTTPS links. Even the CDN providers cannot see what is transferred through the TLS tunnel.
The recryptor acts as an RPC client to fetch IPFS data from an RPCX server run by the same CDN provider.
Serve the author¶
Start an encryption task¶
A GET request is sent to a URL of the following form:
/eg_getEncryptTaskToken?fileId=<hex-string>&sig=<hex-encoded-signature>
It returns a JSON string with three fields:
- encrypt_task_token: an encryption task token (base58-encoded)
- recryptorsalt: a recryptor salt (base64-encoded)
- pubkey: the recryptor’s pubkey (base64-encoded)
The fileId is a unique id the author assigns to his or her files. It can be a sha256sum, an IPFS CID, or anything else the author likes.
The author endorses the fileId with a signature sig, generated with MetaMask’s personal_sign. The signed text is:
To Recryptor: fileId=<hex-string-with-0x-prefix>
The recryptor salt is a true random number generated by the recryptor enclave. It will be used in decryption.
Get encrypted parts from authorizers¶
A POST request is sent to a URL of the following form:
/eg_getEncryptedParts?token=<base58-string>
The encryption task token is given as the token parameter. The body of the request is a JSON string with the following schema:
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "ElfinGuard Encryption Guide",
    "description": "Instructions Used by Recryptors to Encrypt Files",
    "type": "object",
    "properties": {
        "chainid": {
            "type": "string",
            "description": "a hex string indicating the target chain's ID"
        },
        "contract": {
            "type": "string",
            "description": "the EVM address of the authorization contract"
        },
        "function": {
            "type": "string",
            "description": "the signature of the function to be called"
        },
        "threshold": {
            "type": "integer",
            "description": "the minimum number of authorizers required to decrypt this file",
            "minimum": 1,
            "exclusiveMinimum": false
        },
        "authorizerlist": {
            "type": "array",
            "items": {
                "type": "string",
                "description": "the domain name of an authorizer"
            },
            "minItems": 1,
            "uniqueItems": true
        },
        "outdata": {
            "type": "string",
            "description": "the expected outdata from eth_call"
        }
    }
}
It returns a JSON-encoded list of byte strings. Each entry of the list is an encrypted part from an authorizer.
Encrypt file chunks¶
A POST request is sent to one of the following URLs:
/eg_encryptChunk?token=<base58-string>&index=<chunk-index>
/eg_encryptChunkOnServer?token=<base58-string>&index=<chunk-index>
A file is treated as a list of 256KB chunks and must be encrypted chunk by chunk.
The body of the request is the bytes of a chunk. The encryption task token is given as the token parameter, and index shows the index of this chunk in the file’s chunk list.
The encryptChunk RPC returns the encrypted chunk. The encryptChunkOnServer RPC writes the encrypted chunk to the server’s local file at a proper offset indicated by the index parameter.
With repeated requests, you can assemble the fully-encrypted file on the client side (encryptChunk) or on the server side (encryptChunkOnServer).
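The chunking arithmetic can be sketched in Python; the HTTP requests themselves are omitted, and each (index, chunk) pair would be POSTed to one of the two endpoints above:

```python
CHUNK_SIZE = 256 * 1024  # recryptors process files as 256KB chunks

def iter_chunks(data: bytes):
    """Yield (index, chunk) pairs. `index` is passed as the RPC's index
    parameter, and index * CHUNK_SIZE is the offset at which
    eg_encryptChunkOnServer writes the encrypted chunk."""
    n_chunks = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for index in range(n_chunks):
        yield index, data[index * CHUNK_SIZE:(index + 1) * CHUNK_SIZE]

data = bytes(600 * 1024)            # a 600KB example file
chunks = list(iter_chunks(data))
assert len(chunks) == 3             # 256KB + 256KB + 88KB
assert len(chunks[-1][1]) == 88 * 1024
```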
Serve the audience¶
Start a decryption task¶
The RPC endpoint’s URL is like below:
/eg_getDecryptTaskToken
The body of the request is a JSON string with the following schema:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "ElfinGuard Decryption Guide",
"description": "Instructions Used by Recryptors to Decrypt Files",
"type": "object",
"properties": {
"chainid": {
"type": "string",
"description": "a hex string indicating the target chain's ID"
},
"contract": {
"type": "string",
"description": "the EVM address of the authorization contract"
},
"function": {
"type": "string",
"description": "the signature of the function to be called"
},
"threshold": {
"type": "integer"
"description": "the minimum number of authorizers required to decrypt this file",
"minimum": 1,
"exclusiveMinimum": false
},
"authorizerlist": {
"type": "array",
"items": {
"type": "string",
"description": "the domain name of an authorizer"
},
"minItems": 1,
"uniqueItems": true
},
"outdata": {
"type": "string"
"description": "the expected outdata from eth_call",
}
"encryptedparts": {
"type": "array",
"items": {
"type": "string",
"description": "base64-encoded shamir part encrypted with the grantcode from the authorizer"
},
"minItems": 1,
"uniqueItems": true
},
"calldatalist": {
"type": "array",
"items": {
"type": "string",
"description": "the calldata sent to the authorizer as the calldata to call the contract address. calldata[36:68] must equal the fileid"
},
"minItems": 1,
"uniqueItems": true
},
"signature": {
"type": "string"
"description": "a signature signed by the requestor",
},
"timestamp": {
"type": "integer"
"description": "the UNIX timestamp when the requestor signs the signature",
},
"recryptorsalt": {
"type": "string"
"description": "random bytes generated by the recryptor",
},
"fileid": {
"type": "string"
"description": "a unique id for this file",
}
}
}
The fileid parameter was specified by the author and used when calling the getEncryptTaskToken endpoint. The recryptorsalt parameter was obtained by calling the getEncryptTaskToken endpoint.
The requestor must properly construct the signature and calldatalist to prove that they are qualified to view the file.
This endpoint returns a JSON string with the following fields:
- decrypt_task_token: a decryption task token (base58-encoded)
- pubkey: the recryptor’s pubkey (base64-encoded)
Get the decrypted file¶
The RPC endpoint’s URL is as follows:
/eg_decryptChunk?token=<base58-string>&index=<unique-integer>
/eg_getDecryptedFile?token=<base58-string>&path=<file-path-on-ipfs>&size=<integer>
The decryptChunk endpoint decrypts the byte string given in the POST body and returns the decrypted plaintext. The getDecryptedFile endpoint decrypts a file on decentralized storage, and it supports resuming interrupted downloads via the Content-Range header.
A client-side file can be encrypted by encryptChunk and then decrypted by decryptChunk. The index parameter used by decryptChunk must be the same as the one used when calling encryptChunk. The encryptChunk/decryptChunk endpoints are used in cases where files are shared through traditional channels, such as email and FTP, instead of decentralized storage.
The decryption task token is given as the token parameter. The path parameter specifies a file on IPFS served by the RPCX/gRPC server. The size parameter specifies the size of the returned data; if the decrypted data is larger than this size, the tail is truncated and not returned.
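For illustration, the two requests can be constructed as follows. Host, token and path values are hypothetical, and the standard Range request header is assumed for resuming a partial download:

```python
import urllib.request

# Sketch: build (without sending) the two decryption requests.
# Host, token and path values are hypothetical.
def decrypt_chunk_request(base, token, index, ciphertext):
    """POST one encrypted chunk to eg_decryptChunk."""
    url = f"{base}/eg_decryptChunk?token={token}&index={index}"
    return urllib.request.Request(url, data=ciphertext, method="POST")

def decrypted_file_request(base, token, path, size, resume_from=0):
    """GET a decrypted file from eg_getDecryptedFile, optionally resuming."""
    url = f"{base}/eg_getDecryptedFile?token={token}&path={path}&size={size}"
    req = urllib.request.Request(url)
    if resume_from > 0:
        # Ask to resume the download from byte offset `resume_from`.
        req.add_header("Range", f"bytes={resume_from}-")
    return req
```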
Load Balance and Authentication¶
Recryptors are decentralized geographically, because CDN vendors run recryptors on edge nodes. The requestor either queries a CDN vendor for the nearest recryptor node, or sends the request to the CDN vendor, which redirects it to the nearest recryptor node.
Clients must connect directly to an enclave without any HTTP proxy, to ensure the TLS channel can prevent third parties (including the CDN vendor) from stealing the original file.
Recryptors do not support common authentication methods (basic auth, API keys, etc.). Instead, to start an encryption or decryption task, the requestor must provide a signature to prove their identity.
A decryption/encryption task token can only be read by the same enclave that wrote it. So a requestor must stick to the same enclave during the same decryption/encryption task.
Proxy to authorizers¶
CDN vendors’ recryptors send a high volume of requests to the authorizers. Usually, CDN vendors pay the authorizers for better service, and the authorizers provide dedicated servers (with special domain names) or dedicated API keys to paying customers.
Recryptors do not know these dedicated servers or API keys, so the CDN vendor must run an HTTP proxy which forwards the recryptors’ requests to the authorizers.
Rate Limit¶
The recryptor does not support rate limiting itself. Instead, it can connect to an external rate limiter run by the CDN vendor as a microservice.
Recryptor Coordinator¶
A CDN provider runs a coordinator which coordinates all its recryptors and the backend storage engine. A coordinator has the following functions:
Wallet-based login¶
You can log in to the coordinator and get a session ID.
First, you get a random hex string through the following RPC endpoint.
/eg_getNonce
Then, you sign this hex string using personal_sign and use the signature to call the following RPC endpoint:
/eg_getSessionID?sig=<hex-encoded-signature>&nonce=<hex-encoded-nonce>
The nonce parameter used to call eg_getSessionID must be the value returned by eg_getNonce.
A session ID is returned to you, which can be used in later requests.
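The flow can be sketched in Python. The EIP-191 prefix shown is what personal_sign applies before hashing; producing the actual signature requires a wallet or a signing library (not shown), and whether the nonce is signed as raw bytes or as its hex text may depend on the wallet:

```python
# Sketch of the login flow. personal_sign prefixes the message per EIP-191;
# actually signing requires a wallet or a library such as eth-account.
def personal_sign_message(nonce_hex: str) -> bytes:
    """Build the EIP-191 prefixed message that personal_sign hashes and signs.
    Assumption: the nonce is signed as raw bytes."""
    msg = bytes.fromhex(nonce_hex[2:] if nonce_hex.startswith("0x") else nonce_hex)
    return b"\x19Ethereum Signed Message:\n" + str(len(msg)).encode() + msg

def session_url(base: str, sig_hex: str, nonce_hex: str) -> str:
    # The nonce must be exactly the value returned by eg_getNonce.
    return f"{base}/eg_getSessionID?sig={sig_hex}&nonce={nonce_hex}"
```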
Assign a nearby recryptor¶
You can ask the coordinator to assign a nearby recryptor to you, for an encryption or decryption task.
/eg_getRecryptor?session=<session-id>
The domain name of the assigned recryptor will be returned.
Gateway to decentralized storages¶
You can get a non-encrypted file from decentralized storage (such as IPFS):
/eg_getFile?path=<path-of-the-file>&session=<session-id>
The format of path depends on the decentralized storage solution. For IPFS, the path is a CID followed by the file’s path in the Elfin directory.
This RPC helps you get the readme.txt file and the config.json file in the Elfin directory. It may limit the size of the returned file and/or download speed.
Upload an immutable directory¶
You can request the coordinator to upload an immutable directory onto IPFS by posting a FormData (multipart/form-data).
/eg_upload?session=<session-id>&recryptor=<domain-name-of-recryptor>
The format of the FormData is introduced in the FormData for upload section of the Elfin Directory chapter.
The recryptor parameter gives the domain name of the recryptor that ran encryptChunk for the encrypted files in the immutable directory.
This RPC endpoint will return the CID of the immutable directory.
Proxy to Elfin Authorizers¶
You can request the coordinator to assign a proxy to you, which can forward your request to an Elfin authorizer.
Note that the chain name must be provided as input:
/eg_getProxy?session=<session-id>&chain_name=<chain-name>
Usually, end users pay CDN providers for higher download speed. However, end users do not directly pay authorizers. Instead, CDN providers will pay the authorizers. To better serve its customers, a CDN provider can build a proxy to forward customers’ requests (/tx, /log and /calldata) to Elfin authorizers.
Elfin Directory¶
Arrange Permissioned Contents on Decentralized Storages¶
Note
Currently this document only specifies immutable directory on IPFS. Specifications for other decentralized storages will come soon.
IPFS supports an Immutable Filesystem, and an IPFS URL can actually point to a directory.
Sometimes we want to share several files as a whole. For example:
- HLS playlist (.m3u8) file and the segment files
- A html file and its resources (images, videos, css, js)
- A markdown (.md) file and its images
- A video file and several subtitle files for it
As a convention, in the IPFS immutable directory:
- The entry file must be named “index”, such as index.m3u8, index.html, or index.md
- There can be a “readme.md” file to briefly introduce the files in this directory. It can have only text contents or include base64-encoded images.
- There must be a “config.json” file to guide viewers in decrypting some of the files in this directory. It is allowed that some of the files are not encrypted.
Metadata’s schema¶
The schema of the “config.json” file is as follows:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "ElfinGuard Recryptor Configuration",
"description": "Configures the ElfinGuard Recryptor to Decrypt Files",
"type": "object",
"properties": {
"version": {
"description": "The Version of ElfinGuard Protocol",
"type": "integer"
"minimum": 0,
"exclusiveMinimum": false
},
"introduction": {
"type": "string"
"description": "briefly introduce the contents in this directory",
},
"keywords": {
"type": "string"
"description": "some keywords which help search engines indexing this elfin directory, seperated by commas",
},
"author": {
"name": {
"type": "string"
"description": "The author's name",
},
"homepage": {
"type": "string"
"description": "The home page of the author (https or ipns)",
},
"evm_address": {
"type": "string"
"description": "The author's EVM address",
}
},
"files": {
"type": "array",
"description": "provide information for decrypting some of the files in the directory",
"items": {
"type": "object",
"properties": {
"filename": {
"type": "string"
"description": "the name of a file in this directory",
},
"fileid": {
"type": "string"
"description": "a unique id for this file",
},
"size": {
"type": "integer"
"description": "the size of the original file before encryption",
},
"recryptorsalt": {
"type": "string"
"description": "random bytes generated by the recryptor",
},
"decryptionguides": {
"description": "Each blockchain has a dedicated decryption guide",
"type": "array",
"items": {
"type": "object",
"properties": {
"chainid": {
"type": "string",
"description": "a hex string indicating the target chain's ID"
},
"contract": {
"type": "string",
"description": "the EVM address of the authorization contract"
},
"function": {
"type": "string",
"description": "the signature of the function to be called"
},
"threshold": {
"type": "integer"
"description": "the minimum number of authorizers required to decrypt this file",
"minimum": 1,
"exclusiveMinimum": false
},
"authorizerlist": {
"type": "array",
"items": {
"type": "string",
"description": "the domain name of an authorizer"
},
"minItems": 1,
"uniqueItems": true
},
"outdata": {
"type": "string"
"description": "the expected outdata from eth_call",
},
"encryptedparts": {
"type": "array",
"items": {
"type": "string",
"description": "base64-encoded shamir part encrypted with the grantcode from the authorizer"
},
"minItems": 1,
"uniqueItems": true
}
}
}
}
}
}
}
}
}
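For illustration, a minimal config.json instance following this schema might look like the example below; every value (addresses, domain names, fileid, salt, encrypted parts) is made up:

```json
{
  "version": 0,
  "introduction": "A short video with one encrypted HLS playlist",
  "keywords": "video,elfinguard,demo",
  "author": {
    "name": "alice",
    "homepage": "https://alice.example",
    "evm_address": "0x0000000000000000000000000000000000000001"
  },
  "files": [
    {
      "filename": "index.m3u8",
      "fileid": "0x01",
      "size": 1024,
      "recryptorsalt": "a1b2c3",
      "decryptionguides": [
        {
          "chainid": "0x2710",
          "contract": "0x0000000000000000000000000000000000000002",
          "function": "checkAccess(uint256)",
          "threshold": 1,
          "authorizerlist": ["authorizer.example"],
          "outdata": "0x01",
          "encryptedparts": ["bm90LXJlYWwtZGF0YQ=="]
        }
      ]
    }
  ]
}
```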
FormData for upload¶
To store an Elfin directory on IPFS, you must first submit the files that need encryption to the server side using the recryptor’s encryptChunk endpoint. After they are ready, you can upload the files of the Elfin directory using FormData (multipart/form-data). A FormData object should be created using the append method, with the following arguments:
- name: the full name of the file. An Elfin directory can contain subdirectories, so the full name may contain “/”.
- value: For a non-encrypted file, this is its Blob content. For an encrypted file, this is a hex string representing its recryptorsalt.
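As a sketch (not a real browser FormData object), the append arguments can be modeled as (name, value) pairs; all names and values below are illustrative:

```python
# Sketch: model the FormData append calls as (name, value) pairs.
# Non-encrypted files carry raw bytes; encrypted files carry the hex
# recryptorsalt string. All names and values are illustrative.
def build_upload_entries(plain_files, encrypted_salts):
    entries = []
    for name, blob in plain_files.items():
        entries.append((name, blob))       # Blob content of a plain file
    for name, salt_hex in encrypted_salts.items():
        entries.append((name, salt_hex))   # hex recryptorsalt of an encrypted file
    return entries
```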
UTXO Adapter¶
UTXO Chains’ Adaptor to ElfinGuard Authorizer¶
UTXO-based blockchains, such as Bitcoin, Bitcoin Cash, Litecoin, and Dogecoin, do not have a logging scheme like the EVM. The UTXO adaptor derives an EVM transaction containing one EVM log from a UTXO-based transaction, so that ElfinGuard’s authorizers can know what happened on these chains.
The deriving rules are as follows:
- The first output must be an OP_RETURN output whose first data element is “EGTX”; otherwise, this transaction is NOT a derivable transaction.
- The second data element of OP_RETURN is mapped to the EVM log’s source contract address. If it is shorter than 20 bytes, it will be zero-padded; if it is longer than 20 bytes, it will be tail-truncated.
- The 3rd~6th non-empty data elements are mapped to the LOG1/LOG2/LOG3/LOG4 instructions’ topics, i.e., the Solidity events’ topics. Each one is converted to a big-endian bytes32 by zero-padding or tail-truncation. Empty data elements pushed by OP_FALSE are ignored and not mapped as topics.
- The following data are concatenated together to form the LOG instructions’ data.
- The confirmation count as a uint256. Note that this number changes as time goes by; it is not constant. It will be -1 if the transaction can no longer be found in the mempool or the block history.
- The P2PKH/P2SH outputs’ recipient address (20 bytes) and value (12 bytes) as a uint256 array (uint256[]). The value is multiplied by 10**10 such that its unit is changed from satoshi to wei.
- The P2PKH/P2SH inputs’ sender address (20 bytes) and value (12 bytes) as a uint256 array (uint256[]). The value is multiplied by 10**10 such that its unit is changed from satoshi to wei.
- The other data elements of OP_RETURN as a bytes array (bytes[]).
- The first P2PKH/P2SH output’s recipient address is taken as the to-address of the EVM transaction.
- The first P2PKH/P2SH input’s sender address is taken as the from-address of the EVM transaction.
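The padding, truncation and unit-conversion rules above can be sketched as follows. This is a simplified model, not the actual adaptor code; in particular, the zero-padding side is assumed to be the left, matching a big-endian interpretation:

```python
# Sketch of the derivation rules. Simplified model; the left/big-endian
# padding side is an assumption, the real adaptor may differ.
SATOSHI_TO_WEI = 10 ** 10

def to_address(elem: bytes) -> bytes:
    """Map an OP_RETURN data element to a 20-byte contract address."""
    return elem[:20] if len(elem) > 20 else elem.rjust(20, b"\x00")

def to_topic(elem: bytes) -> bytes:
    """Map a non-empty data element to a big-endian bytes32 topic."""
    return elem[:32] if len(elem) > 32 else elem.rjust(32, b"\x00")

def pack_output(addr20: bytes, satoshi: int) -> bytes:
    """Pack recipient address (20 bytes) + wei value (12 bytes) into one uint256 word."""
    return addr20 + (satoshi * SATOSHI_TO_WEI).to_bytes(12, "big")
```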
Currently, we only implement an adaptor for Bitcoin Cash in this repo.
We use virtual block numbers for these EVM logs. Different UTXO adapters may generate different blocks for the same block number. The mempool is checked every 5 seconds, and any new derivable transactions are packed into a new virtual block. The mined blocks are also checked to find new derivable transactions.
The names of these blockchains (Bitcoin, Bitcoin Cash, Litecoin, Dogecoin) are prefixed with “virtual” and then mapped to bytes32 as their EVM chainId.
It is recommended that the source contract address (20 bytes) is calculated as RIPEMD160(SHA256(URI)). The URI is controlled by the authorizing contract’s developers.
It is recommended that the derived logs are viewed as solidity’s anonymous events, because always attaching the same 32 bytes to OP_RETURN is a waste.
The virtual EVM blocks’ attributes are left empty or zero except two:
- Size: it is reused to store the time when this virtual block was built.
- GasUsed: it is reused to store the main chain’s height scanned by this adaptor when this virtual block was built.
These attributes depend on the local time of the adaptor’s machine and the time at which it sees the mempool’s transactions, so they will differ from one adaptor to another.
We don’t want authorizers to disagree on timestamps, so the local timestamp is stored in Size, which is ignored by authorizers, and the blocks’ timestamps are left as zero.
BCH Payment Judger¶
An online judger for BCH stochastic payments¶
PaymentJudger is a third-party enclave to support stochastic payments on the Bitcoin Cash mainchain. It serves inactive payees which cannot check on-chain status or broadcast transactions. The payee may be a smart contract, which cannot act on its own, or a real person who cannot stay online, such as an author who uses the ElfinHost Access-Control protocol to publish videos.
The payer and PaymentJudger follow these steps:
- The payer signs a Bitcoin Cash transaction that pays the payee and has an OP_RETURN output containing the probability that this transaction will be broadcast.
- The enclave receives this transaction, verifies it using testmempoolaccept, and sends the payer a signature endorsing the payer, the payee, the amount, the probability, etc.
- The enclave generates a VRF output based on the transaction’s hashid and decides whether to broadcast this tx based on the specified probability. The VRF output and the proof are sent to the payer whether or not the tx is broadcast.
The payer-signed Bitcoin Cash transaction must be a “derivable” transaction from which the UTXO adaptor can derive an EVM transaction containing one EVM log. Furthermore, the first OP_RETURN output must have at least seven pushed data elements, and the seventh must be a two-byte integer indicating the probability of the payment.
PaymentJudger will serialize and sign the derived EVM log in the same way as the authorizer does: serialize the following solidity struct using abi.encodePacked and sign it using personal_sign.
struct LogInfo {
uint chainId; // which chain did this log happen on?
uint timestamp; // when did this log happen?
address sourceContract; // which contract generates this log?
bytes32[] topics; // a log has 0~4 topics
bytes data; // a log has variable-length data
}
According to the UTXO adaptor’s specification, the data field of LogInfo can be decoded in the following way:
(
uint256 _confirmations,
uint256[] memory outputs,
uint256[] memory inputs,
bytes[] memory otherData
) = abi.decode(logInfo.data, (uint256, uint256[], uint256[], bytes[]));
The probability of payment is encoded in otherData[0], which is the seventh data element of the first OP_RETURN output.
PaymentJudger will return the following information to the payee:
- Prob16: the probability of payment. It’s in the range of [0, 65535]
- Rand16: a pseudo-random number from beta[30:]. It’s in the range of [0, 65535]
- VrfAlpha: the hashid of the transaction,
- VrfBeta: the VRF output,
- VrfPi: the VRF proof,
- LogInfo: serialized struct LogInfo,
- LogSig: a signature endorsing LogInfo,
If Rand16 is less than Prob16, this transaction will be broadcast by PaymentJudger.
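The decision rule can be expressed compactly. In this sketch, prob16_elem is assumed to be the seventh OP_RETURN data element (two bytes, assumed big-endian) and vrf_beta the 32-byte VRF output:

```python
# Sketch of PaymentJudger's broadcast decision. The big-endian byte order
# of Prob16 and Rand16 is an assumption; the real implementation may differ.
def should_broadcast(prob16_elem: bytes, vrf_beta: bytes) -> bool:
    prob16 = int.from_bytes(prob16_elem, "big")    # probability in [0, 65535]
    rand16 = int.from_bytes(vrf_beta[30:], "big")  # beta[30:], i.e. the last two bytes
    return rand16 < prob16
```

With prob16_elem equal to b"\x80\x00", the payment would be broadcast for roughly half of all VRF outputs.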