Introduction

This book is devoted to the Sui Move programming language, and it serves as an advanced guide. It targets developers who are looking to deepen their understanding of Sui Move, going beyond the basics to explore complex patterns, optimization techniques, and nuanced features of the language.

If you are new to Sui Move, we highly recommend that you go through the following resources:

Acknowledgements and Contributions

In the creation of this guide for the Sui Move programming language, significant contributions have been made by two exceptional developers, without whom this book would not have been possible in its current depth and scope.

I extend my sincere thanks to Porkbrain and Suficio for their collaboration and invaluable contributions in co-developing the advanced patterns in this guide.

In addition, I extend my gratitude to Damir, the creator of the Move Book and Sui by Example, with whom we had the pleasure of working side by side, co-developing the Hot Potato Request pattern (i.e. the rolling hot potato) during the development of the Kiosk standard.

Current Development Stage

This book is in its initial phase. Content and examples are subject to significant refinement as we further develop the foundational concepts and incorporate insights from the developer community. Its focus is on providing a thorough and up-to-date guide on advanced Sui Move topics.

Loose topics

A list of loose topics:

Associated readings:

Deploying a contract

A couple of important things to keep in mind when deploying contracts.

To deploy a contract, you need to keep its named address set to 0x:

[package]
name = "Request"
version = "1.6.0"

[dependencies.Sui]
git = "https://github.com/MystenLabs/sui.git"
subdir = "crates/sui-framework/packages/sui-framework"
# mainnet-1.15.1
rev = "08119f95e9ccdc926eae3fff8c95e50678f56aed"

[dependencies.Permissions]
local = "./../permissions"

[addresses]
ob_request = "0x"

Once we have deployed the contract successfully, we need to insert the package ID into the Move.toml to link the codebase to the on-chain smart contract. When we deploy the contract, a package ID is generated; in our case it is 0xe2c7a6843cb13d9549a9d2dc1c266b572ead0b4b9f090e7c3c46de2714102b43. We therefore add it as the address:

[package]
name = "Request"
version = "1.6.0"

[dependencies.Sui]
git = "https://github.com/MystenLabs/sui.git"
subdir = "crates/sui-framework/packages/sui-framework"
# mainnet-1.15.1
rev = "08119f95e9ccdc926eae3fff8c95e50678f56aed"

[dependencies.Permissions]
local = "./../permissions"

[addresses]
ob_request = "0xe2c7a6843cb13d9549a9d2dc1c266b572ead0b4b9f090e7c3c46de2714102b43"

Upgrading a contract

To upgrade the contract, we have to reset the address to 0x and invoke the upgrade call via the Sui CLI. A new package ID will be generated; in our case the new version's package ID is 0xadf32ebafc587cc86e1e56e59f7b17c7e8cbeb3315193be63c6f73157d4e88b9. We then relink the codebase by adding the new package ID in the published-at field and adding back the original package ID under [addresses]:

[package]
name = "Request"
version = "1.6.0"
published-at = "0xadf32ebafc587cc86e1e56e59f7b17c7e8cbeb3315193be63c6f73157d4e88b9"

[dependencies.Sui]
git = "https://github.com/MystenLabs/sui.git"
subdir = "crates/sui-framework/packages/sui-framework"
# mainnet-1.15.1
rev = "08119f95e9ccdc926eae3fff8c95e50678f56aed"

[dependencies.Permissions]
local = "./../permissions"

[addresses]
ob_request = "0xe2c7a6843cb13d9549a9d2dc1c266b572ead0b4b9f090e7c3c46de2714102b43"

Associated readings:

Events

The available sources cover most of what there is to know about on-chain events. An important note we would like to add is how events interact with package versioning. The rule events follow is that they are emitted from their package of origin. This means that if you use an off-chain event listener, you subscribe to the events of a smart contract by referring to its original package ID.

There is a caveat nonetheless. What happens when you introduce a new event in a newer version? Such events will still be emitted from their package of origin, but that origin is not the original smart contract package ID; it is the package ID of the version in which the event was introduced. This can complicate matters, and therefore we generally recommend that developers keep track of the event sources and describe them in their documentation.

To ensure that ALL events, including events introduced in later package versions, are emitted under the original package ID, you should export a wrapper event struct in your original package:

struct Event<T: copy + drop> has copy, drop {
    event: T,
}

This ensures that any subsequently added event will inherit the original package ID through its outer type:

event::emit(Event { event: SomeEvent {}});
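
Here, SomeEvent stands in for any concrete event type added in a later version of the package; its definition and field below are purely illustrative:

/// An event type introduced in, say, version 2 of the package
struct SomeEvent has copy, drop {
    amount: u64,
}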

Associated readings:

Dangling Coins

In the Sui blockchain, an object with dynamic fields can be deleted even if those fields are not deleted with it. Once the object is deleted, all its dynamic fields become unreachable for future transactions; in other words, they become dangling objects. This is the case regardless of whether the field values have the drop ability.

This is especially problematic if the object concerned has some real-world value, such as Coin<T>. Take the following example:

module examples::dangling_coin {
    use sui::coin;
    use sui::sui::SUI;
    use sui::dynamic_field as df;
    use sui::object::{Self, UID};
    use sui::test_scenario::{Self, ctx};

    const SOME_ADDRESS: address = @0x1;
    const USER: address = @0x2;

    struct SomeObject has key, store {
        id: UID,
    }

    fun burn_obj(obj: SomeObject) {
        let SomeObject { id } = obj;
        object::delete(id);
    }

    #[test]
    fun dangling_coin() {
        let scenario = test_scenario::begin(SOME_ADDRESS);
        let some_obj = SomeObject {
            id: object::new(ctx(&mut scenario)),
        };

        let sui_coins = coin::mint_for_testing<SUI>(10_000, ctx(&mut scenario));

        df::add(&mut some_obj.id, 1, sui_coins);

        burn_obj(some_obj);

        test_scenario::end(scenario);
    }

}

This code does in fact compile, and the test passes. In other words, in this test case, 10_000 SUI coins have become unreachable. We recommend that developers always be extra cautious when burning objects that have dynamic fields, and that validations be put in place to prevent them from being burned while valuable assets are still held dynamically inside them.
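
As an illustration of such a validation, here is a minimal sketch of a safer burn function for the examples::dangling_coin module above (the error constant and the hard-coded field key are assumptions for this example):

module examples::dangling_coin {
    // ... (as above)

    const EDanglingValue: u64 = 0;

    /// Safer variant: refuses to delete the object while a value is
    /// still attached under the known dynamic field key.
    fun burn_obj_safe(obj: SomeObject) {
        let SomeObject { id } = obj;

        // Abort if something is still stored under key `1`
        assert!(!df::exists_(&id, 1), EDanglingValue);

        object::delete(id);
    }
}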

Advanced Patterns

A list of advanced patterns in Sui Move:

Associated readings:

Delegated Witness

The witness pattern is a fundamental pattern in Sui Move for building a permissioning system around the types of your smart contract. A contract might declare an Object<T> which uses the witness pattern to allow the contract that defines T to maintain exclusivity over the creation of Object<T>.

Let's say that contract A declares the following type and constructor:

module examples::contract_a {
    use sui::object::{Self, UID};
    use sui::tx_context::TxContext;

    struct ObjectA<phantom T: drop> has key, store {
        id: UID
    }

    public fun new<T: drop>(
        _witness: T, ctx: &mut TxContext
    ): ObjectA<T> {
        ObjectA { id: object::new(ctx) }
    }
}

Contract X can then declare a Witness type such that:

module examples::contract_x {
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    use examples::contract_a;

    // Witness type
    struct TypeX has drop {}

    fun init(ctx: &mut TxContext) {
        transfer::public_transfer(
            contract_a::new(TypeX {}, ctx),
            tx_context::sender(ctx)
        )
    }
}

Given that only contract_x can instantiate TypeX, we guarantee that ObjectA<TypeX> can only be created through contract_x, even though the generic ObjectA is declared in contract_a.

Using the Witness pattern for multiple types

The example above shows the power of the witness pattern. However, this type of permissioning only works when T has drop. What if we have a case such as SomeObject<T: key + store>? In that case, we can use a slightly different version of the witness pattern:

module examples::contract_b {
    use sui::object::{Self, UID};
    use sui::tx_context::TxContext;
    use ob_utils::utils;

    struct ObjectB<T: key + store> has key, store {
        id: UID,
        obj: T
    }

    public fun new<W: drop, T: key + store>(
        _witness: W, obj: T, ctx: &mut TxContext
    ): ObjectB<T> {
        // Asserts that `W` and `T` come from the same
        // module, via type reflection
        utils::assert_same_module<W, T>();

        ObjectB { id: object::new(ctx), obj }
    }
}

This now allows us to use our witness type to wrap any object from our module that has key and store inside ObjectB<T>:

module examples::contract_y {
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};
    use sui::object::{Self, UID};

    use examples::contract_b;

    // Witness type
    struct Witness has drop {}

    struct TypeY has key, store {
        id: UID
    }

    fun init(ctx: &mut TxContext) {
        transfer::public_transfer(
            contract_b::new(Witness {}, TypeY { id: object::new(ctx) }, ctx),
            tx_context::sender(ctx)
        )
    }
}

Note that this pattern functions well for objects that wrap other objects with key and store. Under the hood we are using an assertion exported by the OriginByte utils module implemented as follows:

/// Assert that two types are exported by the same module.
public fun assert_same_module<T1, T2>() {
    let (package_a, module_a, _) = get_package_module_type<T1>();
    let (package_b, module_b, _) = get_package_module_type<T2>();

    assert!(package_a == package_b, EInvalidWitnessPackage);
    assert!(module_a == module_b, EInvalidWitnessModule);
}

public fun get_package_module_type<T>(): (String, String, String) {
    let t = string::utf8(ascii::into_bytes(
        type_name::into_string(type_name::get<T>())
    ));

    get_package_module_type_raw(t)
}

public fun get_package_module_type_raw(t: String): (String, String, String) {
    let delimiter = string::utf8(b"::");

    // TBD: this can probably be hard-coded as all hex addrs are 64 bytes
    let package_delimiter_index = string::index_of(&t, &delimiter);
    let package_addr = sub_string(&t, 0, string::index_of(&t, &delimiter));

    let tail = sub_string(&t, package_delimiter_index + 2, string::length(&t));

    let module_delimiter_index = string::index_of(&tail, &delimiter);
    let module_name = sub_string(&tail, 0, module_delimiter_index);

    let type_name = sub_string(&tail, module_delimiter_index + 2, string::length(&tail));

    (package_addr, module_name, type_name)
}

Delegated Witness

The delegated witness functions as a hybrid between the Witness and the Publisher patterns, with the addition that it provides a WitnessGenerator, which allows witness creation to be delegated to other smart contracts or objects defined in modules other than the one that declares T.

In a nutshell, the differences between a Delegated-Witness and a typical Witness are:

  • A Delegated-Witness has copy and can therefore easily be propagated across a stack of function calls;
  • A Delegated-Witness is typed, which, in conjunction with the copy ability, reduces the number of type-reflection assertions that need to be performed across the call stack;
  • A Delegated-Witness can be created from Witness {}, so, like the plain witness, access to it can be controlled by the smart contract that defines T;
  • It can also be created directly through the Publisher object;
  • It can be generated by a generator object WitnessGenerator<T>, which has the store ability, allowing the witness-creation process to be delegated more flexibly.

Note: This pattern enhances the programmability around object permissions, but it should be handled with care, and developers ought to fully understand its safety implications. In addition, one can use this pattern without the WitnessGenerator<T>; the generator is in and of itself a pattern built on top of the Delegated Witness.

From the OriginByte permissions package we have:

/// Delegated witness of a generic type. The type `T` can either be
/// the One-Time Witness of a collection or the type of an object itself.
struct Witness<phantom T> has copy, drop {}

/// Delegate a delegated witness from arbitrary witness type
public fun from_witness<T, W: drop>(_witness: W): Witness<T> {
    utils::assert_same_module_as_witness<T, W>();
    Witness {}
}

/// Creates a delegated witness from a package publisher.
/// Useful for contracts which don't support our protocol the easy way,
/// but use the standard of publisher.
public fun from_publisher<T>(publisher: &Publisher): Witness<T> {
    utils::assert_publisher<T>(publisher);
    Witness {}
}

We can now have two contracts that do the following:

module examples::contract_c {
    use sui::object::{Self, UID};
    use sui::tx_context::TxContext;
    use ob_permissions::witness::{Witness as DelegatedWit};

    struct ObjectC<T: key + store> has key, store {
        id: UID,
        obj: T
    }

    public fun new<T: key + store>(
        _delegated_wit: DelegatedWit<T>, obj: T, ctx: &mut TxContext
    ): ObjectC<T> {
        ObjectC { id: object::new(ctx), obj }
    }
}

module examples::contract_d {
    use sui::object::{Self, UID};
    use sui::tx_context::TxContext;
    use ob_permissions::witness::{Witness as DelegatedWit};

    use examples::contract_c::{Self, ObjectC};

    struct ObjectD<T: key + store> has key, store {
        id: UID,
        obj_c: ObjectC<T>
    }

    public fun new<T: key + store>(
        delegated_wit: DelegatedWit<T>, obj_c: T, ctx: &mut TxContext
    ): ObjectD<T> {
        ObjectD {
            id: object::new(ctx),
            obj_c: contract_c::new(delegated_wit, obj_c, ctx)
        }
    }
}

In other words, the authorization can be propagated throughout the call stack.
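
For example, the module that defines T can mint a delegated witness from its own plain witness and pass it down the stack (a sketch reusing the modules above; contract_e and TypeE are illustrative names):

module examples::contract_e {
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};
    use sui::object::{Self, UID};
    use ob_permissions::witness;

    use examples::contract_d;

    // Witness of this module
    struct Witness has drop {}

    struct TypeE has key, store {
        id: UID
    }

    fun init(ctx: &mut TxContext) {
        // Mint a delegated witness for `TypeE` from our plain witness
        let dw = witness::from_witness<TypeE, Witness>(Witness {});

        transfer::public_transfer(
            contract_d::new(dw, TypeE { id: object::new(ctx) }, ctx),
            tx_context::sender(ctx)
        )
    }
}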

Witness Generator

TODO!

Associated readings:

Hot Potato Wrapper

In Sui, a hot potato is a struct without any abilities, and it must therefore be consumed in the same transaction batch it was created in (since it does not have the drop ability, it must be destroyed by the contract that declared its type). This is a very useful pattern because it allows developers to enforce that a certain chain of programmable calls is executed, otherwise the transaction batch fails. This pattern became especially powerful since the introduction of Programmable Transactions.

Hot Potatoes are composable, which means that you can wrap them in other Hot Potatoes:

module examples::hot_potato_wrapper {
    use sui::test_scenario;

    struct HotPotato {}

    struct HotPotatoWrapper {
        potato: HotPotato
    }

    fun delete_potato_wrapper(wrapper: HotPotatoWrapper): HotPotato {
        let HotPotatoWrapper {
            potato,
        } = wrapper;

        potato
    }

    fun delete_potato(potato: HotPotato) {
        let HotPotato {} = potato;
    }

    #[test]
    fun try_wrap_potato() {
        let scenario = test_scenario::begin(@0x0);

        let potato_wrapper = HotPotatoWrapper {
            potato: HotPotato {},
        };

        let potato = delete_potato_wrapper(potato_wrapper);

        delete_potato(potato);

        test_scenario::end(scenario);
    }
}

Associated readings:

Rolling Hot Potato

As stated in the previous chapter, in Sui a hot potato is a struct without any abilities, and it must therefore be consumed in the same transaction batch it was created in (since it does not have the drop ability, it must be destroyed by the contract that declared its type). This is a very useful pattern because it allows developers to enforce that a certain chain of programmable calls is executed, otherwise the transaction batch fails. It became especially powerful with the introduction of Programmable Transactions.

Following the introduction of Programmable Transactions, the Rolling Hot Potato pattern was introduced by Mysten Labs and OriginByte, working in collaboration during the development of the Kiosk standard.

Below follows a generic implementation which serves as a way of validating that a set of actions has been taken. Since hot potatoes need to be consumed by the end of the programmable transaction batch, smart contract developers can force clients to perform a particular set of actions given a genesis action.

The module can be found in the OriginByte Request package and consists of three core objects:

  • Policy<P> is the object that registers the rules enforced for the policy P, as well as the configuration state associated with each rule;
  • PolicyCap is a capability object that gives managerial access to a given policy object;
  • RequestBody<P> is the inner body of a hot-potato object. It contains the receipts collected by performing the enforced actions, the metadata associated with them, and the policy resolution logic. RequestBody<P> is meant to be wrapped by a hot-potato object, but it is itself a hot potato as well.

Any developer can implement their logic on top of these generic objects in order to build their own chain of required actions. An example goes as follows:

module examples::request_policy {
    use sui::object::{Self, ID};
    use sui::tx_context::TxContext;
    use ob_request::request::{Self, RequestBody, Policy, PolicyCap};

    // === Errors ===

    const EPolicyMismatch: u64 = 1;

    // === Structs ===

    /// Witness for initiating a policy
    struct AUTH_REQ has drop {}

    /// Rolling Hot Potato
    struct AuthRequest {
        policy_id: ID,

        // .. other fields ..

        inner: RequestBody<AUTH_REQ>
    }

    /// Construct a new `Request` hot potato which requires an
    /// approving action from the policy creator to be destroyed / resolved.
    public fun new(
        policy: &Policy<AUTH_REQ>, ctx: &mut TxContext,
    ): AuthRequest {
        AuthRequest {
            policy_id: object::id(policy),
            inner: request::new(ctx),
        }
    }

    public fun init_policy(ctx: &mut TxContext): (Policy<AUTH_REQ>, PolicyCap) {
        // Policy creation is gated using the Witness Pattern
        request::new_policy(AUTH_REQ {}, ctx)
    }

    /// Adds a `Receipt` to the `Request`, unblocking the request and
    /// confirming that the policy requirements are satisfied.
    public fun add_receipt<Rule: drop>(self: &mut AuthRequest, rule: Rule) {
        request::add_receipt(&mut self.inner, &rule);
    }

    // No need for witness protection as this is admin only endpoint,
    // protected by the `PolicyCap`. The type `Rule` is a type marker for
    // a given rule defined in an external contract
    public entry fun enforce<Rule: drop>(
        policy: &mut Policy<AUTH_REQ>, cap: &PolicyCap,
    ) {
        request::enforce_rule_no_state<AUTH_REQ, Rule>(policy, cap);
    }

    public fun confirm(self: AuthRequest, policy: &Policy<AUTH_REQ>) {
        let AuthRequest {
            policy_id,
            inner,
        } = self;
        assert!(policy_id == object::id(policy), EPolicyMismatch);
        request::confirm(inner, policy);
    }
}

We can now build a pipeline of required actions such that:

// ...

// User performs all required actions
policy_actions::action_a(&mut request);
policy_actions::action_b(&mut request);
policy_actions::action_c(&mut request);

// The request hot potato can now be safely destroyed
request_policy::confirm(request, &policy);

In other words, if the caller does not perform actions A, B and C, the transaction will fail.

In order for these three actions to be required by the policy, their respective contracts need to export a function which has to be called by the owner of the policy:

// ...

// Admin enforces rules A, B and C
request_policy::enforce<RuleA>(&mut policy, &cap);
request_policy::enforce<RuleB>(&mut policy, &cap);
request_policy::enforce<RuleC>(&mut policy, &cap);

Actions can then be added from other contracts or modules:

module examples::policy_actions {
    use sui::tx_context::TxContext;
    use ob_request::request::Policy;

    use examples::request_policy::{Self, AuthRequest, AUTH_REQ};

    struct RuleA has drop {} // Witness and Type marker for Rule A
    struct RuleB has drop {} // Witness and Type marker for Rule B
    struct RuleC has drop {} // Witness and Type marker for Rule C

    public fun genesis_action(
        policy: &Policy<AUTH_REQ>, ctx: &mut TxContext,
    ): AuthRequest {
        request_policy::new(policy, ctx)
    }

    /// Performs a given action A
    public fun action_a(
        req: &mut AuthRequest,
    ) {
        // .. Perform some action ..

        request_policy::add_receipt(req, RuleA {})
    }
    
    /// Performs a given action B
    public fun action_b(
        req: &mut AuthRequest,
    ) {
        // .. Perform some action ..

        request_policy::add_receipt(req, RuleB {})
    }
    
    /// Performs a given action C
    public fun action_c(
        req: &mut AuthRequest,
    ) {
        // .. Perform some action ..

        request_policy::add_receipt(req, RuleC {})
    }
}
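
For completeness, the policy and request used in the fragments above are created along these lines (a fragment in the same style, using only functions defined in the modules above):

// ...

// The policy owner creates the policy and receives its capability
let (policy, cap) = request_policy::init_policy(ctx);

// ... the owner enforces rules A, B and C as shown earlier ...

// A user kicks off the flow by obtaining the hot potato
let request = policy_actions::genesis_action(&policy, ctx);

// ... the user performs actions A, B and C and then calls
// `request_policy::confirm` as shown earlier ...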

Associated readings:

Frozen Publisher

The Publisher object in Sui confers authority to the publisher, in other words, to the one deploying the contract on-chain. With it, developers can create privileged entry points that only the holder of the Publisher object can call. The Publisher, which records its package and module of origin, is essentially used to verify whether a type T is part of the module or package associated with the Publisher object.

The Publisher pattern is a powerful permissioning pattern in Sui Move, and its use case can be seen in the Sui Display standard.

The Sui Object Display standard functions as a template engine, facilitating the management of how an object is represented off-chain through on-chain mechanisms. This standard allows for the integration of an object's data into a template string, offering flexibility in the selection of fields to include.

One challenge that arises, however, is with wrapper types Wrapper<T>: it is not possible for a type T to define its own display if it is wrapped by Wrapper<T>. This is because the outer type takes precedence over the inner type.

We introduce the idea of a FrozenPublisher, which can be used by the wrapper module to allow the publisher of T to define its own display of Wrapper<T>. In other words, it allows the publisher of the type Wrapper to delegate display creation to the publisher of the type T. This way, the inner type T has the necessary degrees of freedom to define its display.

module ob_permissions::frozen_publisher {
    // ...

    struct FrozenPublisher has key {
        id: UID,
        inner: Publisher,
    }

    // ...

    public fun freeze_from_otw<OTW: drop>(otw: OTW, ctx: &mut TxContext) {
        public_freeze_object(new(package::claim(otw, ctx), ctx));
    }

    // ...
}

Say that we want to create a wrapper type Wrapper<T> which allows other types to instantiate it:

module examples::export_display {
    use std::string;
    use sui::display::{Self, Display};
    use sui::object::{Self, UID};
    use sui::tx_context::TxContext;
    use ob_permissions::frozen_publisher::{Self, FrozenPublisher};
    use ob_permissions::witness::{Witness as DelegatedWitness};

    struct Witness has drop {}

    struct Wrapper<T: key + store> has key, store {
        id: UID,
        inner: T
    }

    public fun new<T: key + store>(inner: T, ctx: &mut TxContext): Wrapper<T> {
        Wrapper { id: object::new(ctx), inner }
    }
}

We can then add a function that lets the inner type's witness export its display:

module examples::export_display {
    // ...

    // === Display standard ===

    /// Creates a new `Display` with some default settings.
    public fun new_display<T: key + store>(
        _witness: DelegatedWitness<T>,
        pub: &FrozenPublisher,
        ctx: &mut TxContext,
    ): Display<Wrapper<T>> {
        let display =
            frozen_publisher::new_display<Witness, Wrapper<T>>(Witness {}, pub, ctx);

        display::add(&mut display, string::utf8(b"type"), string::utf8(b"Wrapper"));

        display
    }
}

We can then create a FrozenPublisher and freeze it:

module examples::export_display {
    /// One-Time Witness of this module (its name must match the module name)
    struct EXPORT_DISPLAY has drop {}

    fun init(otw: EXPORT_DISPLAY, ctx: &mut TxContext) {
        // ...

        frozen_publisher::freeze_from_otw(otw, ctx);

        // ...
    }
}

This allows other developers to come along, create their inner types T, and create their display for Wrapper<T>, as shown in the test code below:

#[test_only]
module examples::test_wrapped_display {
    use std::string::utf8;
    use sui::object::UID;
    use sui::transfer;
    use sui::display;
    use sui::test_scenario::{Self, ctx};
    use ob_permissions::frozen_publisher::{Self, FrozenPublisher};
    use ob_permissions::witness;

    use examples::export_display;

    // One Time Witness
    struct TEST_WRAPPED_DISPLAY has drop {}

    // Witness for authentication
    struct Witness has drop {}

    struct InnerType has key, store {
        id: UID,
    }

    const WRAPPER_PUBLISHER_ADDR: address = @0x1;
    const INNER_PUBLISHER_ADDR: address = @0x2;

    #[test]
    fun create_wrapped_display() {
        let scenario = test_scenario::begin(WRAPPER_PUBLISHER_ADDR);

        frozen_publisher::freeze_from_otw(TEST_WRAPPED_DISPLAY {}, ctx(&mut scenario));

        test_scenario::next_tx(&mut scenario, INNER_PUBLISHER_ADDR);

        let dw = witness::from_witness<InnerType, Witness>(Witness {});
        
        let frozen_pub = test_scenario::take_immutable<FrozenPublisher>(&scenario);

        let inner_display = export_display::new_display(dw, &frozen_pub, ctx(&mut scenario));

        display::add(&mut inner_display, utf8(b"name"), utf8(b"InnerType"));
        display::add(&mut inner_display, utf8(b"description"), utf8(b"This is the inner display for Wrapper<InnerType>"));

        transfer::public_transfer(inner_display, INNER_PUBLISHER_ADDR);

        test_scenario::return_immutable(frozen_pub);
        test_scenario::end(scenario);
    }
}

Associated readings:

Transferable Dynamic-Fields

In the Sui blockchain, dynamic fields are a flexible feature allowing users to add, modify, or remove fields from blockchain objects on-the-fly.

These fields can be named arbitrarily and can store heterogeneous values, offering more versatility compared to fixed fields defined at the time of module publication. There are two types: 'fields' that can store any value but make wrapped objects inaccessible by external tools, and 'object fields' that must store objects but remain accessible by their ID.

One challenge that dynamic fields introduce is that when we attach dynamic fields to an object's UID, if for any reason we want to burn the underlying object and move the dynamic fields to another object, we would have to perform 2n remove and add operations, where n is the number of dynamic fields.

This becomes especially hard if the dynamic fields are protected with keys from a myriad of different packages. In other words, it becomes pretty much impossible to move the fields in a single programmable transaction, and it puts a big strain on the client side to build such transactions, since the client would have to know upfront all the packages it needs to interact with.

To address this challenge, we introduce transferable dynamic fields by attaching the dynamic fields not to the object's id: UID but to a dedicated UID field of its own:

module examples::transferable_dfs {
    use sui::tx_context::TxContext;
    use sui::object::{Self, UID};

    struct MyObject has key {
        id: UID,

        // .. other fields

        /// We use this UID instead to store the dynamic fields
        dfs: UID
    }

    // ...
}

In Sui Move, we cannot transfer the id: UID of an object to another object; this is forbidden. Nonetheless, we can transfer UIDs that are not themselves the id field of an object. We can therefore have a burn function that looks like this:

module examples::transferable_dfs {
    use sui::tx_context::TxContext;
    use sui::object::{Self, UID};

    // ..

    // This function is just for illustration. In a real-world scenario
    // it would most likely have permissions around it.
    public fun burn(
        obj: MyObject,
    ): UID {
        let MyObject { id, dfs } = obj;

        object::delete(id);

        dfs
    }
}

We would then be able to move the dynamic fields to the new object.
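
For instance, a hypothetical migration function can store the returned UID wholesale inside a new object, carrying every dynamic field along with it (MyNewObject and migrate are illustrative names):

module examples::transferable_dfs {
    // ..

    struct MyNewObject has key {
        id: UID,

        /// Carries over all dynamic fields of the burned object
        dfs: UID
    }

    // Hypothetical migration: burn the old object and move its `dfs`
    // UID, together with all attached dynamic fields, into a new object
    public fun migrate(obj: MyObject, ctx: &mut TxContext): MyNewObject {
        let dfs = burn(obj);
        MyNewObject { id: object::new(ctx), dfs }
    }
}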

Map-Reduce

Map-Reduce is a pattern inspired by a Big Data programming model popularized by the Hadoop framework. It processes large data sets by dividing the work into two phases: the Map phase, which applies operations to individual items or chunks of data, and the Reduce phase, which performs a final aggregation.

But how does this relate to Sui?

In Sui, operations on Single Writer Objects are fully parallelizable, whereas operations on Shared Objects need to go through full consensus. With the Sui Map-Reduce pattern we can leverage SWO transactions to add tens if not hundreds of thousands of objects to a Shared Object while having most transactions parallelized. We do this by leveraging the Transferable Dynamic Fields pattern discussed previously.

Let's start with an example of two objects that represent the same abstraction, though one is private and the other is shared:

module examples::map_reduce {
    use std::vector;
    use sui::tx_context::{Self, TxContext};
    use sui::object::{Self, UID};
    use sui::transfer;
    use sui::dynamic_object_field as dof;

    /// `PrivateWarehouse` object which stores Digital Assets
    struct PrivateWarehouse<phantom T> has key, store {
        /// `Warehouse` ID
        id: UID,
        total_deposited: u64,
        warehouse: UID,
    }

    /// `SharedWarehouse` object which stores Digital Assets
    struct SharedWarehouse<phantom T> has key, store {
        /// `Warehouse` ID
        id: UID,
        total_deposited: u64,
        warehouse: vector<UID>,
    }

    /// Creates a `PrivateWarehouse` and transfers to transaction sender
    public entry fun new_private<T: key + store>(
        ctx: &mut TxContext
    ) {
        let warehouse = PrivateWarehouse<T> {
            id: object::new(ctx),
            total_deposited: 0,
            warehouse: object::new(ctx),
        };

        transfer::transfer(warehouse, tx_context::sender(ctx));
    }

    /// Adds NFTs to `PrivateWarehouse` in bulk
    public entry fun add_nfts<T: key + store>(
        warehouse: &mut PrivateWarehouse<T>,
        nfts: vector<T>,
    ) {
        let len = vector::length(&nfts);
        let i = 0;

        while (len > 0) {
            let nft = vector::pop_back(&mut nfts);
            dof::add(&mut warehouse.warehouse, i, nft);

            len = len - 1;
            i = i + 1;
        };

        vector::destroy_empty(nfts);
    }

    /// Burns `PrivateWarehouse`s in bulk, moves NFTs to `SharedWarehouse`
    public fun share_warehouse<T: key + store>(
        warehouses: vector<PrivateWarehouse<T>>,
        ctx: &mut TxContext
    ) {
        let shared_warehouse = SharedWarehouse<T> {
            id: object::new(ctx),
            total_deposited: 0,
            warehouse: vector::empty(),
        };

        let len = vector::length(&warehouses);
        let i = 0;

        while (len > 0) {
            let wh = vector::pop_back(&mut warehouses);
            let PrivateWarehouse { id, total_deposited: new_deposit, warehouse: wh_ } = wh;

            object::delete(id);
            shared_warehouse.total_deposited = shared_warehouse.total_deposited + new_deposit;
            vector::push_back(&mut shared_warehouse.warehouse, wh_);

            len = len - 1;
            i = i + 1;
        };

        vector::destroy_empty(warehouses);
        transfer::share_object(shared_warehouse);
    }
}

We can now instantiate SWO warehouses in parallel by calling new_private, and add any non-fungible assets in parallel by calling add_nfts. This is the "Map" part of "Map-Reduce". We then call share_warehouse, which burns all the individual private warehouses and aggregates their NFTs into a shared warehouse.

On-chain Events

With Sui's on-chain storage economics, it is economically feasible to record events on-chain. We can therefore expose an inner API for our program's events to the programs themselves:

struct EventLogs has key, store {
    id: UID,
}

struct EventAKey has copy, store, drop {}
struct EventBKey has copy, store, drop {}

struct EventA has copy, store, drop {}
struct EventB has copy, store, drop {}

fun emit_event_a(
    logs: &mut EventLogs,
) {
    let event = EventA {};
    append_event(logs, EventAKey {}, event);
    event::emit(event);
}
    
fun emit_event_b(
    logs: &mut EventLogs,
) {
    let event = EventB {};
    append_event(logs, EventBKey {}, event);
    event::emit(event);
}

fun append_event<EK: copy + store + drop, E: copy + store + drop>(
    logs: &mut EventLogs,
    key: EK,
    event: E
) {
    // Assumes an empty `TableVec<E>` was attached under `key` when the
    // `EventLogs` object was created; otherwise this borrow aborts
    let log: &mut TableVec<E> = df::borrow_mut(&mut logs.id, key);
    table_vec::push_back(log, event);
}
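
A minimal sketch of how the EventLogs object and its per-event logs could be initialized (the constructor name and the choice of keys are assumptions; it must run before append_event is called for a given key):

fun new_event_logs(ctx: &mut TxContext): EventLogs {
    let logs = EventLogs { id: object::new(ctx) };

    // One `TableVec` per event type, attached under its key type
    df::add(&mut logs.id, EventAKey {}, table_vec::empty<EventA>(ctx));
    df::add(&mut logs.id, EventBKey {}, table_vec::empty<EventB>(ctx));

    logs
}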

Advanced Data Structures

A list of advanced data structures in Sui Move:

Associated readings:

Dynamic Vectors

In the Sui blockchain, dynamic fields are a flexible feature allowing users to add, modify, or remove fields from blockchain objects on-the-fly.

These fields can be named arbitrarily and can store heterogeneous values, offering more versatility compared to fixed fields defined at the time of module publication. There are two types: 'fields' that can store any value but make wrapped objects inaccessible by external tools, and 'object fields' that must store objects but remain accessible by their ID.

Dynamic fields are great because they serve as an abstraction for unbounded object scalability and extendibility. An example of scalability is the object type TableVec which allows us to create an arbitrarily long vector.

struct TableVec<phantom Element: store> has store {
    /// The contents of the table vector.
    contents: Table<u64, Element>,
}
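
As a quick illustration, a minimal usage sketch of table_vec (the module and struct names here are illustrative):

module examples::table_vec_usage {
    use sui::table_vec::{Self, TableVec};
    use sui::tx_context::TxContext;

    /// Illustrative container for an arbitrarily long list of scores
    struct Scores has store {
        inner: TableVec<u64>
    }

    public fun new(ctx: &mut TxContext): Scores {
        Scores { inner: table_vec::empty(ctx) }
    }

    public fun add_score(scores: &mut Scores, score: u64) {
        // Each element is stored as a dynamic field under the hood
        table_vec::push_back(&mut scores.inner, score);
    }
}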

Runtime hit

One trade-off when using dynamic fields to scale your objects in size is that your application will take a runtime hit. This is fine for most cases, but for performance-critical applications you can use a dynamic vector from the OriginByte library:

struct DynVec<Element> has store {
    vec_0: vector<Element>,
    vecs: UID,
    current_chunk: u64,
    tip_length: u64,
    total_length: u64,
    limit: u64,
}

This abstraction combines the best of both worlds: the runtime performance of static fields and the scalability of dynamic fields. In a nutshell, DynVec keeps the tip of the vector in a static field, allowing push_back and pop_back operations to be more performant. When popping elements from the vector, once the sub-vector tip gets exhausted we load the next chunk into the static vector.

public fun pop_back<Element: store>(
    v: &mut DynVec<Element>,
): Element {
    // This only occurs when it has no elements
    assert!(v.tip_length != 0, 0);

    let elem = if (v.tip_length == 1) {
        remove_chunk(v)
    } else {
        pop_element_from_chunk(v)
    };

    elem
}

Conversely, when we push elements to the back of the vector and the sub-vector tip gets full, we move the tip into a dynamic field and instantiate a new static vector.

public fun push_back<Element: store>(
    v: &mut DynVec<Element>,
    elem: Element,
) {
    // If the tip is maxed out, create a new vector and add it there
    if (v.tip_length == v.limit) {
        insert_chunk(v, elem);
    } else {
        push_element_to_chunk(v, elem);
    };
}

To create a dynamic vector, you can simply call the empty constructor function, passing a limit which defines the capacity of each vector chunk.

/// Create an empty dynamic vector.
public fun empty<Element: store>(limit: u64, ctx: &mut TxContext): DynVec<Element> {
    DynVec {
        vec_0: vector::empty(),
        vecs: object::new(ctx),
        current_chunk: 0,
        tip_length: 0,
        total_length: 0,
        limit,
    }
}
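
To tie this together, here is a minimal usage sketch (the module path ob_utils::dynamic_vector is an assumption; adjust it to wherever DynVec lives in your dependency tree):

module examples::dyn_vec_usage {
    use sui::tx_context::TxContext;
    // Assumed module path for the OriginByte dynamic vector
    use ob_utils::dynamic_vector::{Self as dyn_vec, DynVec};

    /// Illustrative registry holding an arbitrarily long list of values
    struct Registry has store {
        entries: DynVec<u64>
    }

    public fun new(ctx: &mut TxContext): Registry {
        // Each chunk holds up to 1_000 elements before spilling over
        // into a dynamic field
        Registry { entries: dyn_vec::empty(1_000, ctx) }
    }

    public fun add(registry: &mut Registry, value: u64) {
        dyn_vec::push_back(&mut registry.entries, value);
    }

    public fun remove_last(registry: &mut Registry): u64 {
        dyn_vec::pop_back(&mut registry.entries)
    }
}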