Curated Claude Code catalog
Updated 07.05.2026 · 19:39 CET
01 / Skill
stickerdaniel

linkedin-mcp-server

Quality
9.0

This MCP server enables AI agents to programmatically interact with LinkedIn, allowing them to access profiles, search for jobs and people, manage messages, and retrieve company information. It's ideal for automating professional networking tasks or gathering public LinkedIn data responsibly.

USP

Unlike direct API integrations, this MCP server uses a real browser session, reducing the risk of detection while providing extensive LinkedIn interaction capabilities. It offers flexible installation options and detailed control over brow…

Use cases

  • Automating LinkedIn profile data extraction
  • AI-driven job search and application assistance
  • Managing LinkedIn messages with AI agents
  • Gathering company insights from LinkedIn
  • Automating connection requests

Detected files (2)

  • .agents/skills/triage-reviews/SKILL.md (skill)
    ---
    name: triage-reviews
    description: Fetch PR review comments, verify each against real code/docs, fix valid issues, commit and push
    disable-model-invocation: true
    argument-hint: '[PR number]'
    ---
    
    # Triage PR Review Comments
    
    Fetch all review comments on the current PR, verify each finding against real code, fix valid issues, and push.
    
    ## Phase 1: Gather Comments
    
    1. Determine the PR number:
       - Use `$ARGUMENTS` if provided
       - Otherwise: `gh pr view --json number --jq .number`
    
    2. Fetch ALL comments (reviewers post in multiple places):
       ```
       gh api --paginate repos/{owner}/{repo}/pulls/{pr}/reviews
       gh api --paginate repos/{owner}/{repo}/pulls/{pr}/comments
       gh api --paginate repos/{owner}/{repo}/issues/{pr}/comments
       ```
    
    3. Extract unique findings — deduplicate across Copilot, Greptile, and human reviewers. Group by file and line.
    
    ## Phase 2: Verify Each Finding
    
    For EVERY finding, verify against real code before accepting or rejecting:
    
    1. **Read the actual code** at the referenced file:line
    2. **Check if the issue still exists** — it may already be fixed in a later commit
    3. **Verify correctness** using:
       - Code analysis (read surrounding context, trace call paths)
       - Run `btca resources` to see what's available, then `btca ask -r <resource> -q "..."` for library/framework questions
       - Web search for API behavior, language semantics, or CVEs
    4. **Classify** each finding:
       - **Valid** — real bug, real gap, or real improvement needed
       - **False positive** — reviewer misread the code, outdated reference, or style preference
    
    ## Phase 3: Fix & Ship
    
    1. Fix all **Valid** findings
    2. Run the project's lint/test commands (check CLAUDE.md for exact commands)
       - If lint/tests fail, fix the failures before committing
       - If a failure cannot be fixed automatically, skip that fix and report it as **Valid (unfixed)** in the Phase 4 table
    3. `git add` only changed files, `git commit` with message:
       ```
       fix: Address PR review feedback
    
       - <one-line summary per fix>
       ```
    4. Push: `gt submit` (or `git push` if not using Graphite)
    
    ## Phase 4: Report
    
    Present a final summary table of ALL findings with verdicts:
    
    | # | Source | File:Line | Finding | Verdict | Reason |
    |---|--------|-----------|---------|---------|--------|
    
    ## Notes
    
    - Never dismiss a finding without reading the actual code first
    - If unsure, err toward "Valid" — it's cheaper to fix than to miss a bug
    - For library/API questions, always use btca or web search — don't guess
    
  • .mcp.json (mcp_server)
    {
      "mcpServers": {
        "greptile": {
          "type": "http",
          "url": "https://api.greptile.com/mcp",
          "headers": {
            "Authorization": "Bearer ${GREPTILE_API_KEY}"
          }
        }
      }
    }
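
The dedup-and-group step described in Phase 1 of the skill above can be sketched in Python. This is a minimal illustration, not part of the skill itself; the findings below are hypothetical stand-ins for entries parsed out of the three gh api responses:

```python
from collections import defaultdict

# Hypothetical findings as they might look after parsing the three
# `gh api` responses (reviews, review comments, issue comments).
raw_findings = [
    {"source": "copilot", "file": "app.py", "line": 10, "text": "possible None deref"},
    {"source": "greptile", "file": "app.py", "line": 10, "text": "possible None deref"},
    {"source": "human", "file": "lib.py", "line": 3, "text": "typo in docstring"},
]

grouped: dict[tuple[str, int], list[dict]] = defaultdict(list)
seen: set[tuple[str, int, str]] = set()
for finding in raw_findings:
    key = (finding["file"], finding["line"], finding["text"])
    if key in seen:  # same finding reported by another reviewer
        continue
    seen.add(key)
    grouped[(finding["file"], finding["line"])].append(finding)

# Two unique findings remain, grouped by file and line.
print({loc: [f["text"] for f in fs] for loc, fs in grouped.items()})
```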
    

README

LinkedIn MCP Server


Through this LinkedIn MCP server, AI assistants like Claude can connect to your LinkedIn. Access profiles and companies, search for jobs, or get job details.

Installation Methods

Four installation methods are covered below: uvx install (recommended), MCP Bundle for Claude Desktop, Docker, and local development setup.

[!IMPORTANT] FAQ

Is this safe to use? Will I get banned? This tool controls a real browser session; it doesn't exploit undocumented APIs or bypass authentication. That said, LinkedIn's TOS prohibit automated tools. With normal usage (not bulk scraping!) you're not risking a ban. So far, no users have been banned for using this MCP. If you encounter any issues, let me know in the Discussions.

What if my agents execute too many actions? LinkedIn may send you a warning about automated tool usage. If that happens, reduce your automation volume. This MCP executes tool calls sequentially via a queue but has no built-in rate limits. Prompt your agents responsibly.

| Tool | Description | Status |
| --- | --- | --- |
| get_person_profile | Get profile info with explicit section selection (experience, education, interests, honors, languages, certifications, skills, projects, contact_info, posts) | working |
| connect_with_person | Send a connection request or accept an incoming one, with optional note | #407 |
| get_sidebar_profiles | Extract profile URLs from sidebar recommendation sections ("More profiles for you", "Explore premium profiles", "People you may know") on a profile page | working |
| get_inbox | List recent conversations from the LinkedIn messaging inbox | working |
| get_conversation | Read a specific messaging conversation by username or thread ID | working |
| search_conversations | Search messages by keyword | working |
| send_message | Send a message to a LinkedIn user (requires confirmation) | working |
| get_company_profile | Extract company information with explicit section selection (posts, jobs); about-section references may include a company_urn entry carrying the numeric id used by LinkedIn's people-search currentCompany URL facet | working |
| get_company_posts | Get recent posts from a company's LinkedIn feed | working |
| search_jobs | Search for jobs with keywords and location filters | working |
| search_people | Search for people by keywords, location, connection degree (1st/2nd/3rd), and current company | working |
| get_job_details | Get detailed information about a specific job posting | working |
| get_feed | Get recent posts from the authenticated user's home feed | working |
| close_session | Close browser session and clean up resources | working |


🚀 uvx Setup (Recommended - Universal)

Prerequisites: Install uv.

Installation

Client Configuration

{
  "mcpServers": {
    "linkedin": {
      "command": "uvx",
      "args": ["linkedin-scraper-mcp@latest"],
      "env": { "UV_HTTP_TIMEOUT": "300" }
    }
  }
}

The @latest tag ensures you always run the newest version — uvx checks PyPI on each client launch and updates automatically. The server starts quickly, prepares the shared Patchright Chromium browser cache in the background under ~/.linkedin-mcp/patchright-browsers, and opens a LinkedIn login browser window on the first tool call that needs authentication.

[!NOTE] Early tool calls may return a setup/authentication-in-progress error until browser setup or login finishes. If you prefer to create a session explicitly, run uvx linkedin-scraper-mcp@latest --login.
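
The start-fast, prepare-in-background behavior described above can be illustrated with a small Python sketch. This is an illustrative pattern only, not the project's actual code:

```python
import threading
import time

# The server answers immediately, while an expensive resource (here, a
# stand-in for the Patchright Chromium cache) is prepared in the
# background. Early tool calls see a setup-in-progress error.
ready = threading.Event()

def handle_tool_call() -> str:
    if not ready.is_set():
        return "error: setup in progress, retry shortly"
    return "ok"

def prepare_browser_cache() -> None:
    time.sleep(0.05)  # stand-in for downloading/unpacking Chromium
    ready.set()

first = handle_tool_call()  # setup has not finished yet, so this errors
threading.Thread(target=prepare_browser_cache).start()
ready.wait()                # block until background setup completes
second = handle_tool_call()
print(first, "|", second)
```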

uvx Setup Help

🔧 Configuration

Transport Modes:

  • Default (stdio): Standard communication for local MCP servers
  • Streamable HTTP: For web-based MCP clients
  • If no transport is specified, the server defaults to stdio
  • An interactive terminal without explicit transport shows a chooser prompt

CLI Options:

  • --login - Open browser to log in and save persistent profile
  • --no-headless - Show browser window (useful for debugging scraping issues)
  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear stored LinkedIn browser profile
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --tool-timeout SECONDS - Per-tool MCP execution timeout in seconds (default: 180.0). Increase further for heavy scrapes / cold-start Chromium / slow networks.
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --chrome-path PATH - Path to Chrome/Chromium executable (for custom browser installations)

Basic Usage Examples:

# Run with debug logging
uvx linkedin-scraper-mcp@latest --log-level DEBUG

HTTP Mode Example (for web-based MCP clients):

uvx linkedin-scraper-mcp@latest --transport streamable-http --host 127.0.0.1 --port 8080 --path /mcp

Runtime server logs are emitted by FastMCP/Uvicorn.

Tool calls are serialized within a single server process to protect the shared LinkedIn browser session. Concurrent client requests queue instead of running in parallel. Use --log-level DEBUG to see scraper lock wait/acquire/release logs.

Test with mcp inspector:

  1. Install and run MCP Inspector: bunx @modelcontextprotocol/inspector
  2. Click the pre-filled token URL to open the inspector in your browser
  3. Select Streamable HTTP as Transport Type
  4. Set URL to http://localhost:8080/mcp
  5. Connect
  6. Test tools
❗ Troubleshooting

Installation issues:

  • Ensure you have uv installed: curl -LsSf https://astral.sh/uv/install.sh | sh
  • Check uv version: uv --version (should be 0.4.0 or higher)
  • On first run, uvx downloads all Python dependencies. On slow connections, uv's default 30s HTTP timeout may be too short. The recommended config above already sets UV_HTTP_TIMEOUT=300 (seconds) to avoid this.

Session issues:

  • Browser profile is stored at ~/.linkedin-mcp/profile/
  • Managed browser downloads are cached at ~/.linkedin-mcp/patchright-browsers/
  • Make sure you have only one active LinkedIn session at a time

Login issues:

  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have logged in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve it manually.

Timeout issues:

  • Page operations failing (elements not found, navigation hangs): increase the browser page-op timeout — --timeout 10000 or TIMEOUT=10000 (milliseconds, default 5000).
  • Entire tool calls timing out (e.g. multi-section profiles, cold-start Chromium, slow containers): increase the per-tool execution timeout — --tool-timeout 300 or TOOL_TIMEOUT=300 (seconds, default 180).
  • Users on slow connections may need higher values for either.

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome


📦 Claude Desktop MCP Bundle (formerly DXT)

Prerequisites: Claude Desktop.

One-click installation for Claude Desktop users:

  1. Download the latest .mcpb artifact from releases
  2. Click the downloaded .mcpb file to install it into Claude Desktop
  3. Call any LinkedIn tool

On startup, the MCP Bundle starts preparing the shared Patchright Chromium browser cache in the background. If you call a tool too early, Claude will surface a setup-in-progress error. On the first tool call that needs authentication, the server opens a LinkedIn login browser window and asks you to retry after sign-in.

MCP Bundle Setup Help

❗ Troubleshooting

First-time setup behavior:

  • Claude Desktop starts the bundle immediately; browser setup continues in the background
  • If the Patchright Chromium browser is still downloading, retry the tool after a short wait
  • Managed browser downloads are shared under ~/.linkedin-mcp/patchright-browsers/

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have logged in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.

Timeout issues:

  • Page operations failing (elements not found, navigation hangs): increase the browser page-op timeout — --timeout 10000 or TIMEOUT=10000 (milliseconds, default 5000).
  • Entire tool calls timing out (e.g. multi-section profiles, cold-start Chromium, slow containers): increase the per-tool execution timeout — --tool-timeout 300 or TOOL_TIMEOUT=300 (seconds, default 180).
  • Users on slow connections may need higher values for either.


🐳 Docker Setup

Prerequisites: Make sure you have Docker installed and running, and uv installed on the host for the one-time --login step.

Authentication

Docker runs headless (no browser window), so you need to create a browser profile locally first and mount it into the container.

Step 1: Create profile on the host (one-time setup)

uvx linkedin-scraper-mcp@latest --login

This opens a browser window where you log in manually (5-minute timeout for 2FA, captcha, etc.). The browser profile and cookies are saved under ~/.linkedin-mcp/. On startup, Docker derives a Linux browser profile from your host cookies and creates a fresh session each time. If you experience stability issues with Docker, consider using the uvx setup instead.

Step 2: Configure Claude Desktop with Docker

{
  "mcpServers": {
    "linkedin": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
        "stickerdaniel/linkedin-mcp-server:latest"
      ]
    }
  }
}

[!NOTE] Docker creates a fresh session on each startup. Sessions may expire over time — run uvx linkedin-scraper-mcp@latest --login again if you encounter authentication issues.

[!NOTE] Why can't I run --login in Docker? Docker containers don't have a display server. Create a profile on your host using the uvx setup and mount it into Docker.

Docker Setup Help

🔧 Configuration

Transport Modes:

  • Default (stdio): Standard communication for local MCP servers
  • Streamable HTTP: For web-based MCP clients
  • If no transport is specified, the server defaults to stdio
  • An interactive terminal without explicit transport shows a chooser prompt

CLI Options:

  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear all stored LinkedIn auth state, including source and derived runtime profiles
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --tool-timeout SECONDS - Per-tool MCP execution timeout in seconds (default: 180.0). Increase further for heavy scrapes / cold-start Chromium / slow networks.
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --chrome-path PATH - Path to Chrome/Chromium executable (rarely needed in Docker)

[!NOTE] --login and --no-headless are not available in Docker (no display server). Use the uvx setup to create profiles.

HTTP Mode Example (for web-based MCP clients):

docker run -it --rm \
  -v ~/.linkedin-mcp:/home/pwuser/.linkedin-mcp \
  -p 8080:8080 \
  stickerdaniel/linkedin-mcp-server:latest \
  --transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp

Runtime server logs are emitted by FastMCP/Uvicorn.

Test with mcp inspector:

  1. Install and run MCP Inspector: bunx @modelcontextprotocol/inspector
  2. Click the pre-filled token URL to open the inspector in your browser
  3. Select Streamable HTTP as Transport Type
  4. Set URL to http://localhost:8080/mcp
  5. Connect
  6. Test tools
❗ Troubleshooting

Docker issues:

  • Make sure Docker is installed
  • Check if Docker is running: docker ps

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have logged in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.
  • If Docker auth becomes stale after you re-login on the host, restart the container once so it derives a fresh runtime profile from the new host session.

Timeout issues:

  • Page operations failing (elements not found, navigation hangs): increase the browser page-op timeout — --timeout 10000 or TIMEOUT=10000 (milliseconds, default 5000).
  • Entire tool calls timing out (e.g. multi-section profiles, cold-start Chromium, slow containers): increase the per-tool execution timeout — --tool-timeout 300 or TOOL_TIMEOUT=300 (seconds, default 180).
  • Users on slow connections may need higher values for either.

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome


🐍 Local Setup (Develop & Contribute)

Contributions are welcome! See CONTRIBUTING.md for architecture guidelines and checklists. Please open an issue first to discuss the feature or bug fix before submitting a PR.

Prerequisites: Git and uv installed

Installation

# 1. Clone repository
git clone https://github.com/stickerdaniel/linkedin-mcp-server
cd linkedin-mcp-server

# 2. Install UV package manager (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 3. Install dependencies
uv sync
uv sync --group dev

# 4. Install pre-commit hooks
uv run pre-commit install

# 5. Start the server
uv run -m linkedin_mcp_server

The local server uses the same managed-runtime flow as MCPB and uvx: it prepares the Patchright Chromium browser cache in the background and opens LinkedIn login on the first auth-requiring tool call. You can still run uv run -m linkedin_mcp_server --login when you want to create the session explicitly.

Local Setup Help

🔧 Configuration

CLI Options:

  • --login - Open browser to log in and save persistent profile
  • --no-headless - Show browser window (useful for debugging scraping issues)
  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear stored LinkedIn browser profile
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --tool-timeout SECONDS - Per-tool MCP execution timeout in seconds (default: 180.0). Increase further for heavy scrapes / cold-start Chromium / slow networks.
  • --status - Check if current session is valid and exit
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --slow-mo MS - Delay between browser actions in milliseconds (default: 0, useful for debugging)
  • --user-agent STRING - Custom browser user agent
  • --viewport WxH - Browser viewport size (default: 1280x720)
  • --chrome-path PATH - Path to Chrome/Chromium executable (for custom browser installations)
  • --help - Show help

Note: Most CLI options have environment variable equivalents. See .env.example for details.
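
The flag-or-environment pattern can be sketched with argparse. This is a hypothetical illustration using the TIMEOUT and TOOL_TIMEOUT equivalents documented in this README, not the project's actual parser:

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Flags fall back to environment variables, then to hard defaults.
    p = argparse.ArgumentParser()
    p.add_argument("--timeout", type=int,
                   default=int(os.environ.get("TIMEOUT", "5000")))
    p.add_argument("--tool-timeout", dest="tool_timeout", type=float,
                   default=float(os.environ.get("TOOL_TIMEOUT", "180.0")))
    return p

os.environ["TOOL_TIMEOUT"] = "300"        # env var set, no CLI flag given
args = build_parser().parse_args([])      # env var wins over the default
print(args.timeout, args.tool_timeout)
```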

HTTP Mode Example (for web-based MCP clients):

uv run -m linkedin_mcp_server --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp

Claude Desktop:

{
  "mcpServers": {
    "linkedin": {
      "command": "uv",
      "args": ["--directory", "/path/to/linkedin-mcp-server", "run", "-m", "linkedin_mcp_server"]
    }
  }
}

stdio is used by default for this config.

❗ Troubleshooting

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have logged in frequently. The --login command opens a browser where you can solve it manually.

Scraping issues:

  • Use --no-headless to see browser actions and debug scraping problems
  • Add --log-level DEBUG to see more detailed logging

Session issues:

  • Browser profile is stored at ~/.linkedin-mcp/profile/
  • Use --logout to clear the profile and start fresh

Python/Patchright issues:

  • Check Python version: python --version (should be 3.12+)
  • Reinstall Patchright: uv run patchright install chromium
  • Reinstall dependencies: uv sync --reinstall

Timeout issues:

  • Page operations failing (elements not found, navigation hangs): increase the browser page-op timeout — --timeout 10000 or TIMEOUT=10000 (milliseconds, default 5000).
  • Entire tool calls timing out (e.g. multi-section profiles, cold-start Chromium, slow containers): increase the per-tool execution timeout — --tool-timeout 300 or TOOL_TIMEOUT=300 (seconds, default 180).
  • Users on slow connections may need higher values for either.

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome


Acknowledgements

Built with FastMCP and Patchright.

Use in accordance with LinkedIn's Terms of Service. Web scraping may violate LinkedIn's terms. This tool is for personal use only.

License

This project is licensed under the Apache 2.0 license.