Thank you for asking. I wanted to think these through properly before sending.
I have been away from APAC long enough that some details may be out of date — but the underlying dynamics (ramp speed, review load, repeated errors, retention) tend to persist even as tools change.
Everything here comes back to one point: identify people earlier and manage them accordingly. Most of the recommendations below follow from that.
— Mia
Overall View
Identify new analysts quickly and move each person into the role where they are most likely to produce reliable output.
Within the first 30–50 days, decide who can grow into a broader analyst role, who is better suited to stable execution, and who is unlikely to ramp fast enough. Onboarding, review, and AI support should then help each person perform that role more reliably. When this happens consistently, ramp-up gets faster, reviewer time is better spent, and the team becomes easier to run.
Section 1: Sorting the Pipeline Earlier
New analysts join with different goals. Some want to grow fast and take on more scope; others want reliable, balanced work and clean output targets. These are also two different kinds of value: one group builds the foundations that improve how the whole team works over time — knowledge systems, better processes, future reviewers. The other sustains the output the team runs on day to day, hitting KPIs and keeping production stable. Managing both groups the same way wastes reviewer time and blurs expectations. The earlier the team distinguishes between them, the faster each person can be managed toward the kind of output the team actually needs.
1. Two Tracks, Not One Pipeline
Read working style early, then manage accordingly.
Recommendation
By day 30–50, place each analyst on a growth or execution track based on working style and early behavioral signals — not just output volume. Align scope, review time, and expectations to that track from that point forward.
Reading the signals
Early signal (days 1–30) | Track | Management move
------------------------ | ----- | ---------------
Seeks more scope; asks why, not just how; engages with feedback beyond the correction itself | Growth | Give scope early: side project, peer exposure, development conversations
Delivers consistently; values clear scope and stability; not pushing for extra tasks | Execution | Set KPI targets, coach for efficiency and accuracy, keep scope stable
Output below baseline by week 4–6 with no clear improvement signal | Fit decision | Decide at week 6, not week 12
The sort is criteria-based, not impression-based. These signals often show up earlier than stable performance patterns.
Implementation Notes
Growth track: people who signal they want to grow should get that opportunity early — don't make them wait until they have proven output.
Execution track: people who want stable, quality work should get clear KPI targets and help getting faster and more accurate — not a development plan they didn't ask for.
Fit decision: if output is below baseline and the picture is still unclear by week 4–6, make the call rather than letting the ramp extend. Delayed decisions cost more reviewer time than early ones.
The track is a starting orientation, not a fixed label — it can shift in either direction as the picture gets clearer. Sorting early helps the team allocate time and expectations more cleanly from the start.
2. Two Paths, Two Definitions of Success
What "doing well" looks like is different for each track.
Recommendation
Growth-track success is about building things the team keeps using — not just personal profile count. Execution-track success is volume × quality, with a recognition system that makes output visible and meaningful. Both need explicit, measurable definitions from day one, but the definition of success should be different for each.
Growth track — milestones by checkpoint
By | Output | Signal
-- | ------ | ------
Day 30 | Complete first 10+ profiles; start logging recurring errors and cross-profile industry patterns | Are they noticing commonalities, not just fixing individual mistakes?
Day 60 | Synthesize patterns into 1 Notion doc; propose at least 1 update to an existing SOP or AI prompt | Are they building something others can use, or just summarizing their own work?
Day 90 | Deliver 1 team-useful output: a sector reference framework, a reusable prompt template, or a knowledge base update | Would this output still be useful after they leave?
Execution track — metrics by checkpoint
Metric | Day 30 | Day 60 | Day 90
------ | ------ | ------ | ------
Profiles / week (volume) | □ | □ | □
First-pass approval rate (% approved without major revision) | □ | □ | □
Avg. reviewer time / profile (declining = submissions getting cleaner) | □ | □ | □
Repeat error rate (% of previously flagged errors reappearing; should trend to zero) | □ | □ | □
Recognition: a visible scoring system, bi-weekly or monthly, that makes volume × quality tangible. High performers see their own progress clearly and find achievement in it. Ideally that connects, over time, to something concrete: title or compensation.
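To make "volume × quality" concrete, here is a minimal sketch of how a bi-weekly score could be computed. The field names, penalty weight, and formula are illustrative assumptions, not an existing team system; the actual weighting should come from whatever the team decides to recognize.

```python
# Sketch: one way to compute a bi-weekly "volume x quality" score per analyst.
# Field names and the penalty weight are illustrative assumptions, not team policy.
from dataclasses import dataclass

@dataclass
class PeriodStats:
    profiles_completed: int    # volume over the period
    first_pass_approved: int   # profiles approved without major revision
    repeat_errors: int         # previously flagged errors that reappeared

def biweekly_score(stats: PeriodStats) -> float:
    """Volume scaled by quality, minus a penalty for repeated errors."""
    if stats.profiles_completed == 0:
        return 0.0
    approval_rate = stats.first_pass_approved / stats.profiles_completed
    penalty = 0.5 * stats.repeat_errors   # assumed penalty weight
    return round(stats.profiles_completed * approval_rate - penalty, 1)

# Example: 14 profiles, 11 approved first pass, 1 repeated error -> 10.5
print(biweekly_score(PeriodStats(14, 11, 1)))
```

The only point a formula like this is meant to illustrate is that volume never counts on its own: the same number of profiles scores lower when first-pass approval drops or old errors resurface.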
Implementation Notes
Growth-track reward = ownership. Give them a real deliverable, not a task. The output should be something the team will still reference after they move on. Promotion conversations should be tied to whether these outputs actually happened.
Execution-track reward = visibility and progression. A scoring system — bi-weekly or monthly — that makes output tangible gives high performers a real sense of achievement. Ideally, sustained performance at the top of the range connects to something concrete — title or compensation — not just positive feedback.
Neither track should be managed by time on team. Growth is measured by what was built; execution is measured by what was produced. Both make the team better — just in different ways.
Section 2: Shortening the Path to Reliable Output
The team already has an onboarding page for every analyst. Extending that same structure to day 30, 60, and 90 turns a one-time setup document into a running record of each person's progression. Done well, it serves three things at once: the analyst's own reflection, the team's knowledge base, and the mentor's ability to set the right direction at each stage — so each person ramps faster, and the knowledge doesn't leave with them.
1. Extend the Onboarding Page Through Day 90
Same Notion structure, same cadence — just keep it running past day one.
Recommendation
At day 30, 60, and 90, each analyst updates their page with a consistent set of fields — focused on the craft of profiling itself: errors flagged in review, what they learned, and which tips or tools actually helped. Writing this down forces clearer thinking about the work. And over time, these pages become a team asset — the accumulated knowledge of every cohort, available to the next one from day one.
What the analyst writes at each checkpoint
Field | What to capture
----- | ---------------
Profiles this period | Which ones specifically; first-pass rate; avg. time
Review errors received | What was flagged, on which profiles, what the correction was
New things learned | Profiling techniques, sourcing approaches, industry patterns noticed
Tips that worked | Shortcuts, better approaches, AI prompts that genuinely saved time
Still unclear | Open questions, edge cases not yet resolved
The template is the same for everyone. The point is not to explain — it is to log specifically: which error, which profile, which tip, which tool. That specificity is what makes it useful to the analyst and to the team.
Implementation Notes
After each checkpoint, the best entries — common errors, effective prompts, profiling tips — get promoted into the APAC Notion library. Each new cohort starts with a better baseline than the one before.
The AI tips section is worth tracking specifically: which NotebookLM flows or prompts actually helped, and which turned out to be noise. Over time this builds an honest picture of what the tools are doing for the team.
2. Use the Pages to Plan What Comes Next
The checkpoint record is the evidence base for Section 1's track decisions — not a separate exercise.
Recommendation
After each checkpoint, the mentor reads the page and uses it to set the next period's plan with specificity. The analyst's errors and learnings are the input; the plan for the next 30 days is the output. This is what connects the checkpoint system back to the track decisions in Section 1 — giving them a paper trail and a regular cadence.
From checkpoint to next plan
What the page shows | Track | What gets planned next
------------------- | ----- | ----------------------
Errors reducing; tips and AI workflows documented | Growth | Expand scope: first side project, first peer review task
Output stable; error types narrowing | Execution | Set Day 31–60 KPI targets based on actual observed pace, not a generic number
By Day 90: tips documented, AI workflows repeatable | Growth | Define what broader scope looks like post-90: coverage theme, team project, or reviewing peers
The checkpoint is not just a record of the past — it is the input to the plan for what comes next. Without that step, it is just documentation.
Implementation Notes
For execution-track analysts, Day 30 data sets the KPI baseline — volume and quality targets calibrated to what was actually observed, not assumed. This is what makes the targets feel grounded and the recognition meaningful when they are hit.
For growth-track analysts, the 90-day record is the evidence for the scope conversation — what they built, what they figured out, what they are ready to take on. Promotion or scope expansion should be tied to what is in the record, not to how long they have been on the team.
Section 3: Turning Review into Skill Transfer
Most review time goes to the same categories of errors, appearing again and again across different analysts and profiles. A pre-review step raises the quality of what gets submitted. A closing loop ensures each correction is carried forward, not just applied once. Together, they reduce review load over time rather than keeping it constant.
1. Introduce a Pre-Review Step
Use a lighter pre-review flow for execution-track analysts and a more reflective one for growth-track analysts.
+
Recommendation
Before submitting, execution-track analysts do a light self-check, while growth-track analysts add a fuller self-review that flags confidence, uncertainty, and open questions. The reviewer then knows where to focus and how much coaching to give.
Pre-review scorecard (growth-track version)
Dimension | Self | Reviewer
--------- | ---- | --------
Research depth (depth and quality of analysis; how far the analyst went beyond surface-level data) | 6/10 | —
Small mistakes (minor errors and unnoticed gaps; anything that should have been caught before submission) | 8/10 | —
Platform consistency (correct use of Gain conventions: rounds, stages, tags, and deal classifications) | 6/10 | —
Sourcing ability (confidence in finding and using the right sources: prospectus, filings, articles) | 9/10 | —
Flag: Latest round tagged as Series C — one source suggests Series B+; Gain stage label needs confirming before submission
The reviewer fills in their column after review. Execution-track analysts can use a lighter version that only flags key uncertainties or source gaps before submission.
Implementation Notes
Growth-track submissions include a completed scorecard and short note on where the analyst is least confident.
Execution-track submissions use a lighter check: flag the main uncertainty or missing source before submission, without adding a full scoring step.
At the bi-weekly check, compare self-scores and reviewer scores over time for growth-track analysts. When they start to converge, the analyst is building independent judgment — and moving toward being able to review others.
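As a rough illustration of what "converging" can look like in practice, the sketch below averages the gap between self-scores and reviewer scores at each check. The dimension names, values, and the idea of reducing the comparison to a single gap number are assumptions made for the example, not a prescribed method.

```python
# Sketch: track how far apart self-scores and reviewer scores are per checkpoint.
# Dimension names, example values, and the single-gap summary are illustrative.

def average_gap(self_scores: dict, reviewer_scores: dict) -> float:
    """Mean absolute difference across scorecard dimensions (0 = full agreement)."""
    gaps = [abs(self_scores[d] - reviewer_scores[d]) for d in self_scores]
    return sum(gaps) / len(gaps)

# (self, reviewer) score pairs at three successive bi-weekly checks
checkpoints = [
    ({"research_depth": 6, "small_mistakes": 8}, {"research_depth": 4, "small_mistakes": 5}),
    ({"research_depth": 6, "small_mistakes": 7}, {"research_depth": 5, "small_mistakes": 6}),
    ({"research_depth": 7, "small_mistakes": 8}, {"research_depth": 7, "small_mistakes": 8}),
]

gaps = [average_gap(s, r) for s, r in checkpoints]
print(gaps)  # [2.5, 1.0, 0.0] -- a shrinking gap means judgment is converging
```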
2. Close the Loop After Review
The same loop for everyone — but the reviewer's investment differs by track.
Recommendation
After every review, the analyst logs what was flagged and what to check next time. The loop is the same for everyone. What changes by track is how the reviewer uses their time: lighter notes for execution analysts so profiles move faster and KPIs get hit; more detailed notes for growth analysts so they develop the judgment to eventually review others. Over time, both reduce reviewer load — just through different paths.
Review investment by track
 | Execution track | Growth track
-------------- | --------------- | ------------
Reviewer notes | Light: flag the issue, mark what to fix | Detailed: explain the reasoning behind the correction
Analyst action | Fix, log 1 row, submit next profile | Read carefully, log with context, apply to next profile
The payoff | Cleaner submissions over time → reviewer spends 20–30% less time per profile → more profiles completed, KPIs hit | Analyst builds judgment faster → can start reviewing peers → reviewer load drops at the team level
Bi-weekly: manager scans error logs by person. Are the same errors repeating — or not?
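A minimal sketch of that scan, assuming the error log is kept as simple rows of analyst, error category, and profile (the row format and example entries are assumptions, not the team's current log):

```python
# Sketch: scan a simple error log and flag categories that repeat per analyst.
# The log format (analyst, error category, profile) is an illustrative assumption.
from collections import Counter

error_log = [
    ("Analyst A", "wrong round stage", "Profile 12"),
    ("Analyst A", "missing source link", "Profile 14"),
    ("Analyst A", "wrong round stage", "Profile 17"),   # same category again -> repeat
    ("Analyst B", "tag inconsistency", "Profile 9"),
]

counts = Counter((analyst, category) for analyst, category, _ in error_log)
repeats = {key: n for key, n in counts.items() if n > 1}

for (analyst, category), n in repeats.items():
    print(f"{analyst}: '{category}' flagged {n} times -- raise at the bi-weekly check")
```

Anything that lands in the repeats list is exactly what the bi-weekly conversation should focus on.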
Implementation Notes
For execution: the goal is throughput. Reviewer time per profile should trend down as submissions get cleaner. The error log makes that trend visible — and the recognition system makes it meaningful to the analyst.
For growth: the reviewer's detailed notes are an investment. The analyst who reads them carefully is the one who develops the judgment to eventually sit on the reviewer side — which is the only way to scale review capacity on the team.
Repeated error patterns across both tracks can feed into the APAC Notion library and, over time, into AI prompt templates — so the whole team gets better, not just individuals.
Closing
All three sections point to the same goal: getting new analysts to stable, high-quality output as fast as possible. The earlier someone is on the right track, the sooner they are producing what the team needs from them. The checkpoint system keeps that ramp visible and on course. The pre-review step raises the quality of every submission before it reaches a reviewer. When this works, analysts produce more reliably — and associates spend less time on review and more time on everything else. That compounds with every cohort.
NotebookLM wasn't part of the workflow when I was on the team, but I've since spent time looking at how AI and knowledge-base systems are being used in practice across different companies. I would be happy to share more about AI or related tools in the future, if useful.
I hope some of this is helpful, and I really hope the team keeps getting better.