A year of Continuous Research in my first year as a Product Manager

Ima-abasi Itama
6 min read · Apr 10, 2023


This past quarter marked my first year as a product person working full-time as a Product Manager. I began my product journey as a UX researcher, then took up a full-time role as a junior Product Manager.

Being in my first full-time startup role was a learning experience. I gained a deeper understanding of continuous discovery (continuous research) and the weight of being responsible for UX research as a PM. One expectation I had when I started this role was that research would always be conducted with a lot of rigor: careful planning, documentation, and presentation. I also thought there would always be time for research. Don’t get me wrong, research should be intentional and properly planned, with all the elements that ensure we yield good insights.

In this article, I’ll walk through the product development stages where I needed to move from unknowns to knowns (or to more unknowns) through different methods of UX research. Ima the explorer. In other words, this is a look at what type of research I did at each stage. I won’t be using a single project or problem as the baseline for all stages of the product development cycle; instead, I’ll emphasize the different stages, the research questions I sought to answer in each, the methods, and the learnings. Artefacts are withheld from this article so as not to divulge any confidential information.

Product description: financial technology software for large-scale exporters in Africa to tap into global markets via proprietary trade data, flexible financing, and treasury solutions.

Customer & User description: large-scale exporters in Africa.

Problem discovery — Identifying pain-points

It all begins here, the phase in the lifecycle where I leveraged different generative (discovery) user research methods. This is an interesting phase because, when attempting to understand or discover problems and opportunities, you can mix product discovery methods. The important thing is crafting the right questions, i.e. defining what needs to be known, and letting that define the methods that can best answer each question or unknown.

The context for the research at this phase was twofold: the internal products and tools leveraged in assessing loan applications and disbursing funds, and the end user’s experience of applying for a loan.

Research questions

  1. What specific touchpoints in our internal operations for loan assessment can be optimized to enhance the experience of the internal workflow?
  2. What are the potential financing use cases or scenarios that are important to users, but have not been addressed by our current offerings?

Research methods I employed to answer these questions — service mapping, a focus group study, one-on-one stakeholder interviews, and customer intro calls.

Participants — Internal and external stakeholders.

Service mapping

To better understand the supporting components, touchpoints, and actions involved in the lending journey, I found a service mapping activity ideal. Since I only had a thorough understanding of the end user journey, this activity helped me visualize the internal touchpoints. As a result, it surfaced opportunities for optimization and inspired questions in my discussion guide for interviewing process owners and actors across the service.

Continuing with a focus group study (usually not the best method, but one with contextual advantages here), we identified inefficiencies in both front-stage and backstage actions that impact the loan application and disbursement processes.

In addition, these stakeholders speak with customers and potential customers every other day, so it was a good opportunity to bring everyone together to review their interactions with customers, share our findings, and prioritize the needs to dig deeper into in the next quarter.

Stakeholder interviews

From the list of pain points and hypotheses, I followed up with one-on-one stakeholder interviews.

A few questions asked during these interviews include:

  1. When and where will users interact with this new solution?
  2. How does this interaction help the user (and the organization) to be successful?
  3. Which documents are critical to users successfully completing the application journey and to your assessment of this new feature?
  4. What are the events leading up to the user requiring this?

Customer intro calls as an observer

Things I looked forward to learning by just observing:

  1. Language used to describe their needs and goals.
  2. How they expect to interact with a solution.
  3. How they address their pain-points and needs.

Solution Generation — Informing our Information Architecture

As a product team, we were designing and developing features from different product categories across the platform, and we needed to understand how our users thought about our categorization of information and the families of upcoming solutions. Thus we conducted a card sort study to figure out the best approach to designing our navigation menu.

We conducted the study with 10 external participants from the financial industry: 6 moderated sessions and 4 unmoderated.

Conducting the study

We had a total of 17 cards to be grouped. On these cards were the titles of features/sub-features and their descriptions.

For the grouping, we used a hybrid of closed and open card sorting: we predefined categories into which participants would drag and drop cards (closed), while also letting them know they could create new groups or rename an existing group (open).
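For analysis, one common way to summarize hybrid or open card-sort results is a card-to-card co-occurrence count: how many participants placed each pair of cards in the same group. Here is a minimal sketch in Python (the card names and data are hypothetical examples, not our actual features):

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """Count how often each pair of cards is grouped together.

    `sorts` is a list of participants' sorts; each sort is a list of
    groups, and each group is a set of card titles.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Every unordered pair within a group co-occurred once.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical sorts from three participants.
sorts = [
    [{"Invoices", "Payments"}, {"Loans"}],
    [{"Invoices", "Payments", "Loans"}],
    [{"Invoices"}, {"Payments", "Loans"}],
]
pairs = co_occurrence(sorts)
# ("Invoices", "Payments") were grouped together by 2 of 3 participants.
```

Pairs with high co-occurrence suggest cards that belong in the same navigation category, while ambiguous pairs flag labels worth probing further in moderated sessions.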

Observations and learnings:

  1. During the study, participants spent more time reading feature descriptions and asking questions than we expected. We took this as a signal that the naming was too complex and an opportunity to further explore participants’ understanding and assumptions about the features.
  2. We validated assumptions about categorization of features.
  3. During analysis, we arrived at renaming two features on the navigation menu as informed by discussion with participants.

Solution validation — Digitizing loan application

For months, lending customers applied for financing through manual processes. As part of wider efforts to scale and to reduce the manual work in applying for a loan and liaising with lenders, we designed the application flow that would eventually integrate with loan management and payment processes.

Because the loans were specialized, we tailored the loan application questions and assessment requirements to fit each loan’s specific needs. However, to make sure that the terms and language used were understandable to the target audience and didn’t rely too heavily on industry jargon, we conducted a content test using a task-based usability test.

Participants:

Potential customers across the 3 different types of loans offered.

Questions we asked after the think aloud test:

  • What was easy to understand in the information, and what was more difficult — can you explain why?
  • We noticed you read that question over and over, how did you interpret that question?
  • What does this mean to you?
  • If you could change anything about the questions on this page, what would you change?
  • Were there any words throughout the information that you were unsure of?

As expected:

  • Some loan application questions were broken down into separate questions.
  • Some were rephrased to reduce jargon.

Post-launch

Yay! The features are finally on the live application. Well, not so fast.

We had our simple research goals defined:

  1. Get a full-scale understanding of how a user would interact with the onboarding flow and the loan applications, separate from how we think they should be interacted with.
  2. Identify where bugs and complicated user experiences arise.

Intended impact

  1. Identify opportunities to improve the User Experience.
  2. Scope the iteration plan for the next development sprint.

Method

  • Unmoderated usability tests with both internal and external participants.

Conducting:

Participants had three sequential tasks to complete on the platform — Onboard, complete KYC Verification, and apply for a loan.

Learnings

  1. Observing via Hotjar recordings and heatmaps revealed more usability issues than reported in the feedback form.
  2. Identified more technology needs of the user persona and opportunities to further tailor the user experience.
  3. We ran into technical defects much more often than expected, which interrupted external participants’ availability, so we ended up working with just internal (non-product-team) participants.
  4. Usability issues and bugs found at this testing stage usually require immediate action, so most findings were escalated right away to reduce time to deployment; proper documentation for tracking and referencing usually came after.

That’s a wrap! Second year running, and there are a lot of learning experiences I’m looking forward to. I want to better leverage secondary and market research, build out research panels for different product lines, and level up my product analytics skills.

If you have any tips to share with me on my plans for this second year or improvements on any of my approaches shared, please share in the comments. Thank you!

Written by Ima-abasi Itama

Experience Design & Research Enthusiast.