automation · mcp · ai-tools · workflow · productivity

Automate social media with MCP, Claude, and TryPost

Connect Claude or ChatGPT to TryPost via MCP. Generate dozens of posts in a single prompt and schedule them across networks without leaving the chat.

Paulo Castellano
12 min read

Scheduling thirty social posts across LinkedIn, Instagram, and X used to be a multi-tab afternoon. The Model Context Protocol changes the shape of that work. This article walks through what MCP is, how to connect Claude Desktop or Cursor to TryPost in about ten minutes, and how to use a single prompt to draft and queue a month of content across networks. The technical setup is short. The interesting part is what happens to the workflow once you stop clicking.

What MCP is

MCP, short for Model Context Protocol, is a small specification from Anthropic for letting AI clients call external tools in a standard way. When an application exposes an MCP server, any compatible client (Claude Desktop, Cursor, Continue, Cline, ChatGPT desktop) can list its tools and call them with structured arguments.

For a social scheduler, that means Claude or ChatGPT can ask TryPost to do anything its own UI can: create a post, schedule it for a specific date, attach an image, drop it into a queue. Under the hood, MCP routes those calls through the same REST API the dashboard uses. The protocol spec lives at modelcontextprotocol.io, but a two-line summary is enough to start. Tools have names and parameters. The AI client picks one, fills in the parameters, and the server runs it.
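Concretely, that exchange is two JSON-RPC 2.0 messages: the client lists the server's tools, then calls one by name with structured arguments. The sketch below models both messages as Python dicts so the shapes are easy to inspect. The `tools/list` and `tools/call` method names come from the MCP spec; the create_post schema itself is illustrative, not TryPost's actual contract.

```python
import json

# What a client sees when it asks an MCP server to list its tools
# (a JSON-RPC response; the tool schema here is illustrative):
tool_listing = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_post",
                "description": "Create and schedule a social post",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "caption": {"type": "string"},
                        "network": {"type": "string"},
                        "scheduled_at": {"type": "string", "format": "date-time"},
                    },
                    "required": ["caption", "network"],
                },
            }
        ]
    },
}

# What the client sends back when the model decides to call that tool:
tool_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_post",
        "arguments": {
            "caption": "Roast date matters more than brand.",
            "network": "linkedin",
            "scheduled_at": "2025-06-02T09:00:00-04:00",
        },
    },
}

print(json.dumps(tool_call, indent=2))
```

Everything else in this article is some variation of the second message, issued once per post.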

Why this matters for social media operations

Most social calendars look the same in practice: thirty to a hundred posts a month, written once, then formatted and slotted into a scheduler one by one. Each network has its own quirks. X caps captions at 280 characters. Instagram allows up to 2,200. LinkedIn rewards line breaks; X penalises them. The writing is rarely what takes longest. The repetitive copy-paste is.

An MCP-enabled workflow collapses that repetition into a single prompt and a review pass. Instead of opening the composer thirty times, you describe what you want once, let the AI client draft and queue it, then audit the results inside TryPost. The scheduler still runs the calendar. The AI client only types faster than you do.

Connect Claude Desktop to TryPost

The setup is one configuration file. On macOS it lives at ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows it sits at %APPDATA%\Claude\claude_desktop_config.json. If the file does not exist yet, create it.

Add the TryPost MCP server inside the mcpServers block:

{
  "mcpServers": {
    "trypost": {
      "command": "npx",
      "args": ["-y", "@trypost/mcp-server"],
      "env": {
        "TRYPOST_API_TOKEN": "<your_trypost_token>",
        "TRYPOST_WORKSPACE_ID": "<your_workspace_id>"
      }
    }
  }
}

Generate the API token from your TryPost workspace settings under Developer. Paste it into the env block and restart Claude Desktop. A hammer icon should appear in the chat input. Opening it lists the TryPost tools the client can now call: create_post, schedule_post, list_channels, update_post, list_posts, plus a handful more.

Cursor uses the same shape at ~/.cursor/mcp.json. ChatGPT desktop supports MCP through its custom connectors panel. Tool names and arguments stay identical across clients, which is the protocol's main reason to exist.

One detail worth knowing up front: a TryPost API token only carries the permissions of the workspace member who created it. A token issued from a viewer-only seat cannot publish. If you intend to schedule batches, generate the token from a member with publish rights. A read-only token is still useful for analytics questions where you do not want the model touching the calendar.


How a batch run actually flows

Once Claude Desktop is connected, the scheduling loop has four steps.

The first is a single prompt that describes the whole batch. Something like "generate 30 LinkedIn posts about AI workflow automation for solo founders, schedule one per weekday at 9am Eastern starting Monday." Claude drafts the captions inline and shows them in the chat before any tool call runs.

The second is an editing pass inside the conversation. You ask for tweaks ("make post 4 less salesy", "rewrite post 12 with a real anecdote"), and the captions update in place. No tab switching, no copy-pasting between the model and the scheduler.

The third is the actual scheduling. When you give the green light, Claude calls create_post thirty times in sequence. Each call passes the caption, the network (linkedin), and an ISO 8601 scheduled_at timestamp. TryPost validates each one, writes it to the calendar, and returns the post ID. Claude collects the IDs and prints a summary.
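The timestamp arithmetic behind "one per weekday at 9am Eastern" is worth seeing once, because it is also what you would write if the batch ever moved from chat into a script. A minimal standard-library sketch, with an arbitrary example Monday as the start date:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def weekday_slots(start: datetime, count: int) -> list[str]:
    """Return `count` ISO 8601 timestamps, one per weekday, skipping weekends."""
    slots, day = [], start
    while len(slots) < count:
        if day.weekday() < 5:  # Mon=0 .. Fri=4
            slots.append(day.isoformat())
        day += timedelta(days=1)
    return slots

# 30 weekday slots at 9am Eastern, starting from a Monday (example date)
start = datetime(2025, 6, 2, 9, 0, tzinfo=ZoneInfo("America/New_York"))
slots = weekday_slots(start, 30)
print(len(slots), slots[0], slots[-1])
```

Each string in `slots` is what lands in `scheduled_at` on one create_post call; the zone-aware datetime is what keeps the batch honest across the daylight-saving boundary.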

The fourth is the spot-check inside TryPost. Open the calendar, drag two posts to better slots, edit one caption that lost its punch in the batch context, and the run is done.

The first batch through this loop usually takes longer than later ones. Once the prompts settle, scheduling thirty posts moves from a two-to-three-hour multi-tab session to roughly fifteen minutes of prompt-and-review.

Prompts that work as starting points

The prompts below are the ones we have seen perform reliably. They are written to be adapted: change the bracketed parts, point them at your workspace, and run.

A simple prompt for a single network:

Generate 20 Instagram caption posts for a coffee roastery brand.
Voice: warm, slightly nerdy about origin and roast date.
Schedule one per day at 8am Pacific starting tomorrow.
Use my brand kit's hashtag preset for hashtags.

A multi-network prompt where the same idea is rewritten per platform:

I have 12 product announcements for the [product name] beta. For each one,
draft three variants: a LinkedIn post (3 short paragraphs, no hashtags),
an X post (under 240 chars, one stat in the hook), and an Instagram
caption (story-driven, 2 to 4 paragraphs, 5 hashtags).
Schedule LinkedIn at 7am Eastern, X at 10am, Instagram at 6pm.
Spread one announcement per business day over the next 3 weeks.

A monthly batch prompt for an Instagram-heavy account:

Generate 60 Instagram posts for [niche]:
20 single-image posts (educational, list-format),
20 reel scripts (hook + 3 beats + payoff, 30 to 45 seconds),
20 story prompts (poll, question, behind the scenes).

Schedule across the next month:
posts on Mon/Wed/Fri at 10am Eastern,
reels on Tue/Thu at 6pm,
stories whenever you find a gap.

Use my brand voice. Avoid the words "obsessed", "literally", "vibe".
Show me the calendar overview when you're done.

The last line, asking for a calendar overview, matters more than it looks. Without it, the model finishes silently and any dropped slot stays invisible until you open TryPost. With it, the chat ends in a tidy table you can scan in seconds.

A maintenance prompt for mid-week edits:

Pull all my scheduled posts for the next 7 days from the [account] LinkedIn channel.
Find any that mention the old pricing. Rewrite them to reference the new $19/month
plan and reschedule them in the same slots.

That last shape, bulk find-and-replace across an active queue, is the kind of edit that previously meant twenty minutes of clicking through individual posts. Through MCP it becomes one prompt and a confirmation.
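Under the hood, that prompt decomposes into a list_posts call, a string filter, and an update_post call per match. The filter step is plain string work, sketched below as a pure function over a hypothetical list_posts payload shape; "$29/month" is a made-up old price standing in for whatever the real one was.

```python
def rewrite_pricing(posts: list[dict], old: str, new: str) -> list[dict]:
    """Return only the posts that mention the old price, captions rewritten.
    scheduled_at is left untouched so each post keeps its original slot."""
    return [
        {**post, "caption": post["caption"].replace(old, new)}
        for post in posts
        if old in post["caption"]
    ]

# Hypothetical shape of a list_posts result for the next 7 days:
queue = [
    {"id": "p1", "caption": "Still $29/month this week.",
     "scheduled_at": "2025-06-03T09:00:00-04:00"},
    {"id": "p2", "caption": "New tutorial out now.",
     "scheduled_at": "2025-06-04T09:00:00-04:00"},
]
updated = rewrite_pricing(queue, "$29/month", "$19/month")
print(updated)
```

The model then issues one update_post per entry in `updated`; untouched posts never leave the calendar.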

API and MCP are the same surface

Anything described above also runs as a direct API call. The MCP server is a thin wrapper over the REST API documented at docs.trypost.it/api-reference. Same endpoints, same payloads, same rate limits.

That parity matters because some workflows belong in code, not in chat. A scheduled cron job that pulls Shopify orders and queues a thank-you Instagram story is a script. So is a Zapier-style automation that turns new blog posts into social cross-posts. Use the API there. Use MCP when the work involves judgement, drafting, or one-off bulk operations where typing into a chat is faster than writing a script.
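For the script path, here is roughly the HTTP request the MCP server issues on a create_post call, built with the standard library only. The base URL, endpoint path, and field names are assumptions for illustration; check docs.trypost.it/api-reference for the real contract before wiring this into a cron job.

```python
import json
from urllib import request

API_BASE = "https://api.trypost.it/v1"  # illustrative, not the documented base URL

def build_create_post(token: str, caption: str, network: str,
                      scheduled_at: str) -> request.Request:
    """Build (not send) a create_post request; payload fields are assumed."""
    payload = json.dumps({
        "caption": caption,
        "network": network,
        "scheduled_at": scheduled_at,
    }).encode()
    return request.Request(
        f"{API_BASE}/posts",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_post("tp_test_token", "Thanks for the order!",
                        "instagram", "2025-06-05T18:00:00-07:00")
print(req.get_method(), req.full_url)
```

Sending it is one `request.urlopen(req)` away; the point of the sketch is that the payload is identical to what the MCP tool call produces, because both hit the same endpoint.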

The dedicated build-on-MCP guide covering custom tools, batch operations, and error handling lives at docs.trypost.it/ai/introduction. It is worth a bookmark before wiring this into a production automation.

Quality controls

The model writes fine prose. It also writes AI-flavoured prose. Without review, a feed run entirely through MCP starts to read like every other AI-driven account on the network, which is bad for the algorithm and worse for the human reader.

A few patterns to scan for before approving a batch:

  • Generic openers ("In today's fast-paced world", "In the ever-evolving landscape of", "Let's dive into").
  • Three-item lists where two would do. Models default to the rule of three.
  • Caption hooks that are grammatical but say nothing concrete in the first line.
  • Identical post structures across the batch. Real writers vary rhythm; models default to the same skeleton.
  • Em dashes everywhere.

When those tells appear, two responses work. The faster one is to ask the model to redo the run with constraints ("redo with hooks under 8 words and no em dashes"). The slower but cleaner one is to edit inside the TryPost calendar, where every post is already attached to the right channel and slot.
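The cheapest of those tells can also be scanned mechanically before approval. The sketch below flags two of them, generic openers and em dashes, across a batch of captions; the opener list is seeded from the bullets above and is meant to be extended, not treated as complete.

```python
# Openers worth flagging, seeded from the list above; extend freely.
GENERIC_OPENERS = (
    "in today's fast-paced world",
    "in the ever-evolving landscape",
    "let's dive into",
)

def flag_tells(caption: str) -> list[str]:
    """Return the AI tells found in one caption."""
    tells = []
    lowered = caption.lower()
    for opener in GENERIC_OPENERS:
        if lowered.startswith(opener):
            tells.append(f"generic opener: {opener!r}")
    if "—" in caption:  # em dash
        tells.append("em dash")
    return tells

batch = [
    "In today's fast-paced world, founders need automation.",
    "Shipped the beta — 40 users in week one.",
    "Roast date beats brand name. Here's why.",
]
results = {i: tells for i, c in enumerate(batch) if (tells := flag_tells(c))}
print(results)
```

Anything the scan flags goes back through the redo-with-constraints prompt; anything it misses is what the human pass is for.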

For one-off posts, the free post generator covers the same ground without any setup. The bio generator and the rest of our free toolkit work the same way. MCP earns its setup cost when the work is bulk.

Brand kits are the other lever. If you have configured tone, voice, banned words, and channel-specific guidelines inside a TryPost brand kit, the MCP create_post tool references them automatically, and the output gets noticeably less generic. A banned-words list inside the brand kit is the most direct way to kill the AI tells listed above before they reach the calendar.


When to keep humans in the loop

A useful rule: the more the post anchors a real claim, the more human review it needs. Three categories that should never go from prompt to publish without a person reading the output:

Factual claims about the company, the product, or the customer. Pricing, feature availability, customer names, anything that has a wrong answer. The model will draft confidently around an outdated detail if it has one cached.

Brand-specific jargon and tone. Internal team names, in-jokes, recurring formats. The model approximates rather than recalls these. If a brand voice has earned its consistency, treat the model output as a first draft, not a final.

Time-sensitive content. Anything tied to a launch date, an event, a moving deadline. Schedule windows shift; an autoscheduled post pinned to "next Thursday" is a bug waiting to happen.

Everything else (educational posts, evergreen tips, hook variations on the same topic) is comfortable territory for AI-first drafting with light human review.

Where the time goes

The first batch through MCP takes longer than later ones. Setup, prompt iteration, and learning what the model gets wrong all cost something on day one. Once the prompts settle, the savings compound.

A typical curve for an indie creator scheduling thirty posts a week looks like this. The legacy workflow (write in a doc, paste into the scheduler, format per network, set times, repeat) runs two to three hours of focused work per session. The same batch through MCP, after the prompts are tuned, runs about fifteen minutes of prompt-and-review.

The math gets bigger for agencies. Eight clients at a few hours each adds up to a full day of scheduling. The same workload through MCP collapses into a long lunch.

The trade-off worth naming: MCP rewards volume. If a single account posts five times a week to one network, the setup cost (small) and the prompt-engineering tax (real) do not pay back. Anyone scheduling more than thirty posts a week across more than two networks should at least try one batch session this way.

Pitfalls worth flagging

Four common mistakes that cost the most time when they appear:

Treating the first draft as the final draft. Batch output is a starting point. Skipping the review step ships thirty posts that all open with the same syntactic pattern, and engagement falls accordingly.

Scheduling identical copy across every network. Cross-posting saves time on writing but flattens the rhythm each network rewards. Hooks that work on LinkedIn fall flat on X; hashtag stacks that pull on Instagram look spammy on LinkedIn. The fix is to ask the model for variants per network in the same prompt, not the same caption duplicated.

Ignoring length limits. The MCP server returns errors when a post exceeds X's 280-character cap or Instagram's 2,200, but those errors arrive after generation. Bake the limit into the prompt up front. "X posts under 240 characters including hashtags" is a useful constraint to write once and reuse.
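The constraint belongs in the prompt, but a pre-flight check is cheap insurance on top. A minimal version using the caps quoted above; note that `len()` is only an approximation for X, which counts URLs at a fixed length and some Unicode as two characters.

```python
# Caption caps per network, as quoted in this article. len() is a rough
# proxy for X's real counting rules (URL weighting, wide Unicode).
CAPS = {"x": 280, "instagram": 2200}

def over_cap(caption: str, network: str) -> bool:
    """True when the caption exceeds the network's cap, if one is known."""
    cap = CAPS.get(network)
    return cap is not None and len(caption) > cap

print(over_cap("a" * 281, "x"), over_cap("a" * 240, "x"))
```

Run it over the drafted batch before the first create_post call, and the length errors surface before any generation or scheduling is wasted.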

Letting the model pick the schedule unsupervised. Times specified in the prompt usually beat times the model invents on its own. Default model output gravitates to nine in the morning on every weekday, which performs worse than slots informed by your own analytics. Use the analytics tab to find the real best times for each channel, then bake those into the prompts.

Where to start

MCP plus an AI client is not a replacement for a scheduler. It is a faster surface for the same workflow a scheduler already runs. If your social calendar is one channel and a handful of posts a week, the pricing page plus the in-app composer is the right entry point, and MCP can wait.

If the calendar runs across three or more networks at thirty or more posts a week, the ten-minute config above is worth running once. Generate a TryPost API token, paste it into Claude Desktop, run a single batch, and the time-savings curve takes over from there. The full Build with AI guide at docs.trypost.it/ai/introduction covers the rest: webhooks for approval flows, custom tools for niche workflows, error handling when a network's API stutters. It reads better after the first successful batch than before.
