This is the vmst.io Moderation Process. In the spirit of full disclosure, we have made it open to public viewing. Its purpose is to guide our staff in their decisions to take action on a member.
There may be times when moderators and administrators deviate from this document as needed, specifically when new issues arise that are not covered by existing processes. Ultimately, our staff may take whatever action they deem necessary on any account to protect the health and safety of the instance and other users.
If you feel you have been treated unfairly by our staff, you may file an appeal or contact an administrator.
A full list of active staffers can be found here.
There are two levels of moderation staff with different abilities:
All staff should be intimately familiar with the Rules.
Moderators are vmst.io community members with additional permissions to take action on any reported or observed user account in violation of our rules. They also have the ability to help maintain the trending functions of the Explore page.
Our moderators are volunteers who contribute their time, energy and knowledge.
Administrators have the ability to de-federate entire instances for continued violation or bad actions against our members, or our infrastructure. They also act as senior moderators and community leaders.
The Administrators are the final authority on any appeals, moderation issues or rule interpretations.
When a post is reported, all members of the staff receive an email alert and an alert message is posted to the "vmst.io Moderation" room in Slack. Both alerts include links to the report in the Mastodon UI.
Reports about members of the staff cannot be seen by that staff member in the Mastodon UI.
Whatever action moderators take on a report, in addition to it being logged in the Mastodon system, they should post the action to the thread in Slack to indicate they have followed up. If you need to loop in another staffer for an assist, do so via @mention in Slack.
All internal discussion of reports should be considered confidential.
The exact source of a report should never be revealed to a user, unless there are extenuating legal circumstances, and then only at the discretion of an administrator.
Communication of disciplinary actions against user accounts should be done outside of the public view, such as through Mastodon's built-in warning/appeals system, email, private mention or other secure means. All private communication regarding actions should be documented in appropriate internal channels.
Our staff will use discretion in moderating our users, and we expect the same from our members who have been subject to moderator actions. Users who publicly push back on or question moderation decisions made against an account should be directed to an administrator, who will issue a public response if one is warranted.
Should the situation escalate towards harassment of staff, the member will be asked to leave vmst.io.
All Rules are not created equal.
At a high level, reports will generally fall into a few different categories.
- "I don't like it"
- Illegal Content
- Rule Violation
- Something Else
These are based on the default options for the Mastodon UI, and are set by the user who files the report. Keep in mind that third-party clients may have fewer options based on their usage of the API.
Moderators can reclassify a report that was filed under the wrong category, but misclassification is not grounds for dismissal. We should always err on the side of trusting that the user making the report has a problem worth addressing.
The Mastodon UI will direct users who choose "I don't like it" to mute or block the user. If this is insufficient, and a report is still generated, it must be classified differently.
Clear and obvious spam should be actioned by suspending the offending account.
Clear and obvious examples might be new accounts, or accounts from other instances, that:
- Post links to external sites for pornography, cryptocurrency, gambling, or illegal content.
- Post in excess, particularly if they appear automated.
- Use excessive mentions or hashtags.
Please refer to our rules on Advertising & Excessive Promotion for items which might run afoul of this rule but are less clear and obvious.
We have a zero tolerance policy for violations of our bigotry or extremism policies. If you find users posting openly hateful content you are authorized to immediately suspend the user’s account.
If the account has just signed up or their time on the instance has only been used for such content, reach out to an Administrator for immediate deletion of the account profile.
Please refer to our rules on NSFW content for items which might run afoul of this rule.
Sexual content might generally fall into one of three categories:
Absolutely under no circumstance is 18+ content allowed without a content warning. Users should not use their vmst.io accounts for posting sexually explicit or pornographic content. Illegal content, such as CSAM or content that violates the consent of others will be actioned by account deletion and referral to the appropriate law enforcement agency.
Brands who create accounts in violation of our policies will generally be asked to migrate their account to a different instance. These types of violations should be handled by Administrators.
There are three main types of reports on Mastodon:
- vmst.io user vs vmst.io user (intra-instance)
- vmst.io user vs federated instance user (inter-instance)
- federated instance user vs vmst.io user (inter-instance)
When our users report our users, we are only accountable to ourselves. In this case it’s likely that someone will be unhappy with the results of the reporting and subsequent action. We must do what is right and in the best interest of the community.
While members can appeal, they also have the ability to migrate their account to another instance if the resulting moderation actions do not satisfy them.
When reports involve other instances, we must always be mindful that there is a moderation team involved in both places. Our team is responsible for the enforcement of our rules on our instance, and other instances may have rules and interests that are different from our own.
When our members report another instance user, we have the ability to prohibit the remote user from interacting with our members, but we cannot control what the moderation team of the other instance does with their report. If we take action to limit that account and its abilities, those limits only affect its interactions with OUR users.
If there are entire instances that are engaging in behaviors that are putting our users at risk, or where the user-base is consistently in violation of our rules, then we may take action to defederate the entire instance.
When users of another instance report our users, we must be mindful of two things:
- Are they violating OUR instance rules?
- Are they making us look like a bunch of jerks?
If we have members who are continually getting reported by other instances, those administrators and moderators may eventually take action to defederate our instance.
If what our members are doing is not in violation of our rules, but is deemed inappropriate for the community of another instance, then it is that instance's prerogative to limit the user from interacting with their users, just as we can do the same.
If you see reports that may rise to this level, please raise the issue in Slack so we can discuss.
There are a few options available for any reported content.
- Mark as resolved (Dismiss)
- Delete posts
- Limit user
- Suspend user
- Send a warning
- Freeze user
- Set content Sensitive
The most common types of moderation events are expected to be Mark as Resolved or Send a Warning to the member.
Freezing or suspending local users should be done when there are either escalating or egregious violations of site policy.
All default user moderation decisions (aside from Mark as Resolved) will notify the affected user by email. Some of the options accessible under Custom can disable this user notification, but sending email notices is still considered standard practice.
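For staff comfortable with scripting, the actions above correspond to stock Mastodon's Admin API account-action endpoint. The sketch below is illustrative only (the helper function is our own invention, not part of Mastodon or vmst.io tooling); it shows how each UI action maps to the API's `type` parameter.

```python
# Hypothetical sketch: mapping the moderation actions listed above to
# Mastodon's Admin API (POST /api/v1/admin/accounts/:id/action).
# "Mark as resolved" and "Delete posts" use separate endpoints and are
# not covered here.

# Mastodon UI action -> Admin API "type" parameter
ACTION_TYPES = {
    "Send a warning": "none",          # warning text only, no restriction
    "Set content Sensitive": "sensitive",
    "Freeze user": "disable",          # blocks login/posting, keeps posts
    "Limit user": "silence",           # removes public visibility
    "Suspend user": "suspend",         # ban; data purged after ~30 days
}

def build_action_payload(action, text="", notify=True):
    """Build the form payload for POST /api/v1/admin/accounts/:id/action."""
    if action not in ACTION_TYPES:
        raise ValueError(f"unsupported action: {action}")
    return {
        "type": ACTION_TYPES[action],
        "text": text,  # message delivered to the user with the strike
        # Standard practice per the paragraph above: leave notification on.
        "send_email_notification": notify,
    }

payload = build_action_payload("Limit user", text="Rule: excessive promotion")
print(payload["type"])  # silence
```

In practice these actions are taken through the admin UI; the mapping is mainly useful for understanding what each option does under the hood.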
Please refer to the following chart as a quick reference for appropriate actions to take against a user account based on the type of user.
All major rule categories have warning presets available for Moderators to leverage when taking action on an account. Moderators are expected to use these presets and only send a custom message to a user through the admin portal if discussed with an administrator.
Marking as resolved generally means you're dismissing the report, because it has been determined not to warrant moderator action.
This is often the appropriate choice when the reporter can simply mute or block the other account because the content is something they personally dislike.
For vmst.io users, no mark is left against the user's profile. Neither the reporter, the reported account, nor the federated instance (if applicable) is notified of the decision to dismiss the report.
Sending a warning to a local user is preferable when they run afoul of our rules, but don't require any specific corrective action.
If a user has had multiple warnings issued, especially for the same type of behaviors, then additional corrective actions may be required.
Set content Sensitive can be useful for posts that should have carried a content warning but were posted without one (NSFW materials, or other content that may be offensive to other members).
Warnings can be sent in addition to taking this action.
Freezing a local user prevents them from posting with their account but does not hide or delete any previous posts. It is useful for folks who need a time out, although there is currently no time-based mechanism for enforcement.
Freezing a user does not prevent them from exporting their data and moving to another instance. This makes it preferable for long-term user removal, where the content does not need to be removed but the user is no longer compatible with the vmst.io community.
Limit user prevents the user from being able to post with public visibility, meaning anything they do in the future is hidden from anyone who’s not already following.
This is useful for federated users, limiting their access to vmst.io members through direct interaction or the federated timeline. Its use against vmst.io members is generally not effective.
Suspending a user is equivalent to a ban, as this prevents any interaction with the account and sets their account to be deleted by the server in 30 days unless other actions are taken.
If you're unsure what to do, ask for a second set of eyes. If no one else is available, you are empowered to freeze the account if you feel that will stop further escalation, until we can assess the issue and determine the appropriate long-term action.
Users can appeal actions of moderators. In such cases actions of moderators will be reviewed by an administrator, and after additional discussion or deliberation the appeal may be accepted or denied.
While we greatly appreciate folks who have paid subscriptions to vmst.io or have made one-time contributions, financial support does not earn special favor with the moderation staff. At no time should it factor into the decision to take action against a member.
Subscribers are free to cancel their contribution to vmst.io at any time. Administrators may, at their own discretion, refund subscriptions for members who have their account access restricted or terminated.