AJ Davis | Experiment Zone
How did one company increase conversion rates by 100% in just 4 weeks?
Experiment Zone founder AJ Davis was the head researcher at Google Optimize, where she had the opportunity to work closely with the team working on conversion rate optimization. It was this experience that led her to create Experiment Zone, where she and her team marry those two disciplines — user research and CRO — to deliver major ROI to their clients.
In this episode, she shares the process they use and breaks down the detailed steps that anyone can follow to get better marketing results, from conducting usability studies (including the audience size you need, how to structure questions and find audiences, and what tools to use) to how to analyze the results, conduct experiments, and identify opportunities for using optimization to improve conversion rates.
Get the details on all of this, and more, in this week’s episode.
Resources from this episode:
Connect with AJ on LinkedIn
Visit the Experiment Zone website
Kathleen (00:17):
Welcome back to the Inbound Success podcast. I'm your host Kathleen Booth. And this week, my guest on the podcast is AJ Davis, who is the founder of Experiment Zone. Welcome to the podcast, AJ.
AJ (00:30):
Thanks for having me on.
Kathleen (00:32):
Uh, I'm excited to talk to you because you are an expert in conversion optimization, which I always think is, is fascinating to talk about. Um, and you also know a lot about user research, and kind of the intersection of those two is where we're gonna focus. Um, before we jump into that, can you, uh, maybe share with my listeners a little bit about yourself and your story and kind of how you wound up doing what you're doing now and what Experiment Zone is?
AJ (00:59):
Sure. Yeah, lots to tackle there. Um, I started my career as a user experience researcher, so I was on the product development side, and my job was to understand user needs and user pain points with user interfaces. I talked to thousands of people, watched how they used interfaces, and reported that back to product teams to make prioritization decisions and new feature decisions. Um, I then worked on a product kind of by chance, uh, which turned into Google Optimize when I was at Google, and I was the first researcher, the head researcher, on the product for several years before it launched. So I had the chance to talk to and learn from a bunch of people doing CRO. And usually as a researcher, you're sitting on the other side, just taking it in, you know, digesting the information, passing it back to the teams you're working with. And I found myself getting in my head and going, Hmm, that's really interesting. Oh, Hmm, I wish I could do that with the types of things we're learning. And what really appealed to me about CRO is that you can take user research insights that will teach you about the problems and the needs people have, and then you can use CRO to see if you're really solving those problems in the real world. So long story short, I left Google, I worked at another agency for a year to really make sure I understood CRO and could execute on it, and then started Experiment Zone after that.
Kathleen (02:21):
And when did you start the company?
AJ (02:23):
It was five years ago.
Kathleen (02:25):
Congratulations. You made it past, like, what is it? They say most startups don't make it past two or three years, I think. <laugh>
AJ (02:30):
That's the threshold, yeah.
Kathleen (02:31):
Not the zone of death.
AJ (02:33):
And 2020 being in the middle of all that.
Kathleen (02:35):
Oh my gosh. Yeah. Interesting times. Right. Mm-hmm <affirmative> that's exciting. Um, so I wanna start with the user research angle cuz you describe yourself as a researcher. So let's actually start there. And if, if somebody's interested in improving, you know, conversions on their website, that seems like that's the first step. So how do you tackle that process?
AJ (02:58):
Mm-hmm <affirmative> I think a lot of times in marketing, people kind of cut the research part of it short. Um, there's a lot of reasons for that. And if we have ideas for solutions, we can go test them in the real world, and sure, that works as a starting place. But if we wanna really know what our customers need, and really know what's keeping them from going from being a visitor to becoming a customer, research is the best place to do it. And so there's a lot of different methods, and I love to start with what the real question is that we're trying to answer before deciding what method's most appropriate. Um, so let's say in an eCommerce context, you wanna understand why you're getting drop off from your product page through checkout. What we would wanna do is a usability study, because what we'd wanna do is watch people use that part of the site. We take them through the whole journey, but we'd focus on that part and have them think aloud and share the things they're confused about, what they're noticing, what they expect to happen versus what really happens. And through all that process, and doing that with multiple participants, you can really see the main pain points people are having.
Kathleen (04:01):
How much data do you need? So if you're gonna, you're talking about observing people going through this experience, you know, is it 10 people? Is it a hundred people? Is it a thousand? Like what's the volume there that will give you enough information to make it statistically significant? Or does that even matter? Maybe, maybe you don't need statistical significance.
AJ (04:20):
There's great debate on that number and that question. I think that in practice, people really look at five people per group or per cohort. So you might have five new visitors, five returning visitors or current customers, prospects. Um, you can segment a couple different ways, but you really want five to get about 80% of the problems. So usability studies aren't gonna show us every single problem that exists or every single confusion point, cuz there's lots of edge cases and things that we might not see in that flow, but it gets us the biggest ones. And so if the five people go through and don't see it, it's not likely to be as big of a problem as if all five people see it.
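A quick aside on the math behind the "five users, about 80% of the problems" rule of thumb AJ references: it comes from a simple probability model used in usability research. If a problem affects a fraction p of users, the chance that at least one of n participants runs into it is 1 - (1 - p)^n. A minimal sketch in Python, using the commonly cited p ≈ 0.31 as an illustrative assumption (it is not a number from this episode):

```python
# Probability that at least one of n participants encounters a problem
# that affects a fraction p of users: 1 - (1 - p)^n.
# p = 0.31 is the average problem-occurrence rate often cited in usability
# literature; treat it as an illustrative assumption, not a property of your site.

def discovery_rate(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n} participants -> {discovery_rate(0.31, n):.0%} of such problems seen")
# With 5 participants this comes out around 84%, roughly the "about 80%" rule of thumb.
```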
Kathleen (04:58):
And are you doing that like five people on desktop, five people on mobile, five people on tablet, um, you know, and like, do you have to do that for every iteration and, like, different browser type? How, how much is enough?
AJ (05:11):
You could, but I would say if you haven't done user research before, I would look at how much traffic you're getting. If it's mostly on mobile, start with five people on mobile and you're gonna learn a lot more. Um, I think what's really powerful about research plus CRO is that it doesn't just stop after you do the usability study. So if you're doing product development, you're building something brand new and you need a lot of confidence that what you're building is right, you wanna have a lot more people in it. But if your goal is to say, what are some problems I don't see in the experience that I can uncover by bringing in some fresh perspectives from people who represent our target customers, five people on your single device will get you a ton of information rather than waiting six months to do a full blown study with every possible combination.
Kathleen (05:55):
And then I imagine, um, how you conduct the research with each of those five people is equally as important as the fact that you've done it. Meaning, like, you're not leading the witness, if you will. And, uh, you're comparing apples to apples in terms of experiences. So I'm just guessing some of this has to do with technology, like, are there tools you use to conduct these studies, and how do you create question sets or guided journeys that yield the information you're looking for? I don't even know where to start with that question. I feel like I just asked like 10 questions in one. So maybe you could just break down, when you know you need to do this and you need five people to go through it, what are the steps you go through? And what are the tools you use to build a really good, defensible process?
AJ (06:43):
And the most important thing is the planning part of this. So what we don't wanna do is show up and just ask people to look at something and hope that they all go down the same path. We wanna create a plan. So in the research world, we create a research plan with a question, and then we create screening criteria so that when we go to find people, they match, uh, the type of users we're looking for. And then we wanna create a scenario. So we are taking them through something, we're setting the stage. So it's not just, what do you think in this moment? But let's say you're shopping for a Mother's Day present, you come across the site because a friend mentions it, and you would like to pick out two items for Mother's Day, or your budget's $50, or something where you're setting the stage so that every single one of those participants is coming in with that same setup.
And then you take them through a script. So the script is where it kind of depends on what tool you're using for how strict it's gonna be. If you're doing, um, unmoderated research, and I'll mention some tools for that, uh, you're gonna have a single script that they're gonna read, and you need to set it up so that they can get through it without your help. So it needs to be very specific, and it needs to say, here's the single question; once you've finished answering this question, go to the next question. In an unmoderated setting, you have to hand the participant that voiceover, that intuition you'd normally provide when you're sitting next to someone, nodding along and saying, okay, and now we're doing this. In a moderated setting, we typically also follow a script, but if the participant in the scenario is getting kind of tripped up, then the moderator can help them and say, okay, great, uh, today that's not how that's set up, let's move on to this next thing.
Kathleen (08:26):
And moderated, is that like you're sitting next to them in a room, or is that you're on a virtual call with them? What is that?
AJ (08:33):
You can do both. Yeah, you can do both. So in my career, almost all the research I did early on was in a lab. It was two rooms side by side with a one way mirror in between, and there was a team of people behind the mirror watching what was going on.
Kathleen (08:47):
Like being in the police station, yeah.
AJ (08:48):
It's exactly like that. Um, and then the researcher would sit next to the participant, and we would do all these things very intentionally to not distract them or lead them into thinking something was important. You know, sometimes it was, uh, I wouldn't just take notes on the things that stood out, cause I didn't want them to know which things were noteworthy.
Kathleen (09:06):
Like, whoa, all of a sudden they're writing something down. <laugh>
AJ (09:08):
Exactly. So my trick was to just press the same key, like I'd go back and forth between two keys if nothing interesting was happening, so I wouldn't have to read it all later. Um, but that of course has evolved, and with, you know, COVID and remote work, it's even better, because you have access to people for research that you couldn't get to come to a lab during a workday. So you can have even better participants by doing remote research. So whether it's Zoom, or it used to be Go To Meeting, a couple different platforms like that where you can do just what we're doing now, having a conversation remotely. Um, and then for the unmoderated tools, usertesting.com is, like, the big tool in the industry. Um, that's more of an enterprise solution, and pretty much you buy it and everyone in your organization can use it. Uh, there's some other tools like trymyui.com, which is similar, but, uh, you can pay as you go. Usability Hub's a great one as well. And then you can get participants through respondent.io and userinterviews.com. Those are some great tools.
Kathleen (10:11):
I know you mentioned some of those are more enterprise. Are any of them really suitable or accessible for smaller businesses that may just have like one time needs?
AJ (10:20):
I would say the other ones. So usertesting.com used to be something where you could buy a single set of...
Kathleen (10:26):
Yeah, cause I actually remember using that when I owned my agency, and we were small, so at one point it was definitely more accessible. Have they changed the pricing structure?
AJ (10:34):
They have, yeah. So it's more like you buy a subscription for it. But there are other tools now. So all the other ones I mentioned, you can just pay by the project. So they make it easy for you to access at all levels.
Kathleen (10:45):
So, okay. So you set out your experiment, and you kind of figure out your scenario. I love the scene setting, you know, like you've been given $50 to buy your Mother's Day present. Um, and then you walk them through the experiment. Um, how do you decide who should go through the testing? Because when I used usertesting.com, they would just find people, and I could give them loose demographic kind of requirements, although I was never certain that they really met them back then. Mm-hmm <affirmative>, I'm sure it has evolved a lot. So for you, what's the process you go through to find testers?
AJ (11:22):
Yeah. So it's similar to that. So you can set parameters around income, employment status, age, gender, things like that. You also wanna set screener questions that correspond with who you're looking for. So if you're an e-commerce site again, and you wanna talk to your current customers, people who've purchased from you in the last six months, and you wanna talk to people who've never purchased from you, you could use a single question. Uh, actually you would need two questions. You would need one question, in the last six months, which of these brands have you purchased from? And then, which of these brands have you ever purchased from? And the way I like to catch those people who might be just filling out the screener in hopes that they get into everything, which isn't something that comes up as often as I had expected when we moved to more remote research, um, is that I would include a screener question with a brand that's fake. So I would make up, like, one or two brands, and if they click on that one, I exclude them.
Kathleen (12:18):
That's so smart. Oh my gosh, I love this <laugh>
AJ (12:21):
And then the other tip is to, uh, make sure the screener question isn't leading them. I see a lot of screener questions that say, um, have you bought golf equipment in the last six months, yes or no? Well, I think I know what they want me to say, so I will probably press yes, even if it was seven months. But if it's like, you know, tell me about the most recent time for each of these things, fill in a number for each one, or click the ones within the last six months, or what have you, you can structure it so that it's not leading them to the specific answer.
Kathleen (12:50):
You have to make it seem kind of random. Like, this is a random group of six brands, or this is a random group of six things you might have done. That actually all makes so much more sense to me now, having been the person who has gone through some of these tests. Now I understand why things have been phrased the way they have. So that's, that's fascinating. <laugh> Okay. So you set these experiments up, there are all these tools that you can use, or, or in person methods, to walk somebody through these tests, you get the results back, and then what?
AJ (13:23):
Then you do the hard work, uh, analyzing it, right? So occasionally we'll have things where anybody can watch the videos or replay what happened, and we can all align immediately on what the one main pain point is. But most of the time you have to break it down, and you have to take apart what happened in each part, how severe each of those parts was. Um, it tends to be a team event where we're taking clips of videos, rewatching them, taking notes on what we think was the pain point or not. So there's a whole bunch of different techniques in research for getting to that final analysis. But then your goal is to say, not what were all the problems, but what were the biggest problems? And so that's where the researcher finesse comes in, because what you don't wanna do is watch a single person go through your website and then change everything that they had any problems with, because you may see someone else with a different perspective. So you wanna make sure you look at all the data, see the frequencies, how those things might interplay, and then really understand severity, how big the problems are.
Kathleen (14:27):
Um, and then, and then you get to the point where you're going to make your changes and do the experiment mm-hmm <affirmative>. So like you you've analyzed the data, you've come up with your, let's call it top five, top 10 problems, and you have them prioritized in order. Um, we, then we come to the stage of the actual, like testing mm-hmm <affirmative>, um, and I've, I've talked to different people who have different philosophies about like how to, how to run really good tests and like how many variables and what variables you should change, et cetera. I'd love to hear your perspective on this. So how do you run a good controlled test?
AJ (15:03):
Yeah. And I think this is really another one of those philosophical things. Like, if we were in academia, we might do it differently than what we wanna do in the real world, where we wanna maximize the ROI we can get from the program and not just have a perfect answer to how people respond. So what I like to think about is, in our user study, we're gonna know the problems. We still don't know the solutions. We just know the pain points, and there might be like five different ways to solve the same problem. They couldn't find the button. Should it be moved up? Should it be bigger? Should it be placed somewhere else on the page? Is it the wrong page altogether? And all of those can lead to solutions that you might want to AB test. So we like to think about, uh, testing as a theme.
So if all the changes take place on the same page and they're related to the same problem that we're trying to solve, we're more likely to group them together. We'll make the button bigger, we'll maybe change the color, we'll change the words on it, and maybe have a couple variants within that. So the answer is, where should the add to cart button be for this particular page? Um, some people are more purist than that, and if you have tons of traffic, if you're Amazon, you're gonna test all those variables separately. And then other people will say, why not test the whole page? But in that scenario, if you don't know what variable, or what type of variable, changed it, it's really hard to apply that finding elsewhere. We don't know if it's product images. We don't know if it's description placement. We dunno if it's a button. We dunno if it's reviews. And so while we can have a short term win, the long term learnings get lost. So we do like to group them together so that we have a clear learning from it, as well as that conversion lift.
Kathleen (16:41):
So break that down for me, because I get what you're saying around, like, you need to understand what it was that moved the needle so that you're able to apply that in other ways. Right? Mm-hmm <affirmative> But if you're grouping changes on a page, how do you know? Like, are you just then gonna say we have to make all these changes on every page? Or, like...
AJ (17:01):
No, the learning is like, the goal of this test was to make the button more prominent on the page, and we did it in three or four different ways. So if button prominence matters to your visitors, then you can take that theme of button prominence and apply it elsewhere, maybe with some different design elements.
Kathleen (17:18):
Okay. That makes sense. Cuz what I thought I heard you saying was, we might change the button, we might adjust the product image, we could change the size of the header font over the description. Like, that's not what you're saying. You're saying...
AJ (17:30):
No, if we do that, then we'd all have no idea what to attribute it to.
Kathleen (17:32):
Right. Okay. That's what I thought. I think I was confused when you said theme, and I was thinking the theme was the whole page, whereas...
AJ (17:39):
Oh yeah, that sounds like...
Kathleen (17:40):
Your point is the theme is one aspect of the page. Mm-hmm <affirmative> Okay. Got it. That makes sense. Um, I think this is so fascinating in theory, but can you share some examples of where you've done this in practice and what kind of an impact it can have?
AJ (17:59):
Mm-hmm <affirmative> yeah, one of the ones that we did last summer was with a software company, and they were trying to figure out what information their paid ad traffic needed to see. So they were drawing a bunch of people in to sell a specific software solution, and then sending them to a page with all the product features. And it's what they thought people needed. It's like the product page. It had some highlights about the business, uh, but it's really lengthy. We tested a couple different things on it, based on some assumptions we had about what people needed, but things weren't really moving the needle. We were seeing kind of small lifts to conversion, but not the bigger things that we were expecting to find. So we took a step back and did some user research. Um, this is one of my favorite things, to think about who in the organization can be your subject matter expert to interview, to kind of leverage that expertise and first learn from them before you go do the harder recruit.
So we talked to their sales team, like the frontline sales team who was talking to people as soon as they fill out a lead form, taking them all the way to close, and we would ask them, what did you talk about in the first conversation, and what did you talk about on the day that you closed the sale, and what sort of themes emerged? So the thing that we heard that really surprised us was that they only talked about differentiators. Nobody was really asking about product features by the time they're making the close; people just cared about, why should I go with you versus the other competitors in the market? So that was enough justification for us to design a new experience that was very different than before from a content perspective. The design was the same, the organization was generally the same, and we focused on showing the three differentiators, the three things that made them different than their competitors. And we increased conversion by over a hundred percent.
Kathleen (19:51):
Wow. And how long did that take, like from when you made the change to, when you saw that impact on conversion?
AJ (19:56):
I think for them it was more like four to six weeks. We saw it early, but we wanted to make sure we weren't just seeing noise early on. So it took about four to six weeks of it running, um, consistently performing that well, and then we rolled it out.
Kathleen (20:11):
Yeah. So, and I guess that's the corollary to my earlier conversation around, you know, how many users do you need to do good user research? Like, how much data do you need on the back end mm-hmm <affirmative> to really be able to trust that a change in a metric is actually happening?
AJ (20:26):
Yeah, yeah. A lot of people go too soon, right? Where early on you'll just see noisy data and it looks like a big win or a big loss, and if you turn it off, you are making an assumption on not enough data. So we look for three things. We look for two weeks of data, because a Sunday user is different than a Monday user, or context, personal context, depending on what the product is. Uh, we look for hitting 95% confidence, so we're seeing that this difference is significantly different, that these two groups are not overlapping at all in the data set. And then we also look for a minimum sample, which is based on previous performance of the page, how much would be a reasonable amount for us to see a lift and know that it's, uh, a meaningful one and not just noise.
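For anyone who wants to see what "hitting 95% confidence" looks like in practice, here is a minimal sketch of a two-proportion z-test on A/B conversion counts. This is a generic statistical check, not a description of Experiment Zone's internal tooling, and all the numbers are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts (illustrative)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Hypothetical numbers: 20,000 visitors per arm, 3.0% vs 3.5% conversion
p_a, p_b, p_value = ab_significance(600, 20_000, 700, 20_000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, p-value {p_value:.3f}")
# A p-value below 0.05 corresponds to the 95% confidence threshold AJ mentions.
```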
Kathleen (21:14):
Is there like a ratio there, like, so if a page gets, I don't know, a thousand visits in a month, is there some percentage of that that you're looking for?
AJ (21:21):
Typically it's a little bit more nuanced than that, because there's some assumptions baked into it. You can find calculators that'll do this for you as well; there are public facing ones called duration calculators. And there's some assumptions that you need to make about what percentage confidence you're comfortable with. I can kind of get into some of that more technical stuff, but basically there's a couple of variables that can influence the answer to that. But the rule of thumb...
Kathleen (21:48):
Humor me, humor me and go down that path for a minute, cuz I like to get a little bit nerdy on these things.
AJ (21:52):
So yeah. Yeah. So basically, if we don't have a lot of traffic, we might be comfortable with a lower confidence, because we just wanna learn fast, even though we're comfortable with it being a little bit different. Um, so for some clients with smaller sites, we might do like 90 or 85% confidence. So that's gonna lower how much time the test needs to run, because we just don't need as much data to get to that. I like to think about it in surveys, cuz that's the first time I encountered it, and I think a lot of us see surveys for political things, and I think it's more tangible than website traffic. So for estimating who's gonna vote for whom in a presidential election, we need to have a representative sample, and we don't need to talk to every single person in the United States.
So we wanna have a margin of error that we're comfortable with. That's why you often see that published, what is the margin of error on this data, and that influences how many people had to be involved. So if we want a really small margin of error and a really high confidence, we would need to talk to almost everyone. But as soon as we start scaling that back to, okay, we really can only afford to talk to 2000 people, does that give us about a 4% margin of error? Right? So depending on the total population size and what the demographics are, that will determine what error bars are realistic.
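The survey margin of error AJ is describing comes from a standard formula for a simple random sample. A small sketch, assuming a 95% confidence level and the worst-case 50% proportion; the exact figures quoted for any given poll or sample depend on the confidence level and design effects assumed:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.95):
    """Simple-random-sample margin of error for a proportion (illustrative)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 at 95% confidence
    return z * sqrt(p * (1 - p) / n)

for n in (500, 1_000, 2_500):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
# Larger samples shrink the margin of error, but with diminishing returns.
```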
Kathleen (23:09):
And if somebody wanted to calculate this automatically, you're saying there are calculators on the internet that you can go out and use? Mm-hmm <affirmative> What would you Google to find one of these?
AJ (23:17):
<laugh> Yes. If I was doing a survey, there's a bunch of survey sample size calculators that will guide you through that. Uh, in AB testing, it's duration calculators that'll help you. They'll take a look at what traffic has been, what the conversion rates are, and what a likely difference is between the conversion rates. So if you're seeing that you would need to hit a 200% lift on your variant within 30 days, you're not likely to see that, um, without a really big change. So you'll pick those elements and it will tell you how much traffic you need.
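Under the hood, the duration calculators AJ mentions are doing a sample-size calculation along the lines sketched below. This is a generic approximation, not any particular vendor's tool, and the baseline conversion rate, minimum detectable lift, and traffic numbers are made-up inputs you would replace with your own:

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline_rate, min_lift, confidence=0.95, power=0.80):
    """Approximate sample size per variant for a two-proportion A/B test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)      # e.g. a 20% relative lift
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical site: 3% baseline conversion, hoping to detect a 20% relative lift
n = visitors_per_variant(0.03, 0.20)
monthly_traffic = 40_000                      # eligible visitors, split across two variants
weeks = n * 2 / monthly_traffic * 4.3
print(f"~{n:,} visitors per variant, roughly {weeks:.1f} weeks at this traffic")
```

Dividing the required sample by your eligible traffic gives a rough test duration, which is why lower-traffic sites can realistically only chase big lifts.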
Kathleen (23:49):
Oh, that's awesome. I love that. For those of us who are not statisticians by training, <laugh> that's super helpful. Um, that's really interesting. Okay. So then I guess my next question is around the theme of, is the juice worth the squeeze, right? Mm-hmm <affirmative> Because this is a phenomenal process that can definitely have an important impact on your results on your website. But I imagine that there's a threshold of, like, we're just gonna make this change and it's not worth going through the process. So how do you understand when it's worth investing the time and the resources in a process like this?
AJ (24:26):
Mm-hmm <affirmative> I think that the single variable to start with is how much traffic you have. So if you're doing AB testing and you only have a few thousand people per month, your changes need to be really big for them to be detectable, and the test will probably need to run for quite a while. So it's maybe worth doing an AB test for something really big, like testing your homepage against a brand new design with a brand new message. You still won't know why it's happening, though. And then, uh, as your traffic levels go up, let's say you have 30,000, 50,000 a month, you can still make grouped changes and see the impact of them. Uh, but you probably don't wanna be testing things in your footer, or on pages people don't reach, or that probably won't impact whether or not they're gonna do it.
And then once you get to like a hundred thousand or a million views per month, you can really start testing almost anything on the site and understanding the impact, but it still might be small. And so then it's a question of, are you getting collective ROI from all that together? So the way we help our clients figure that out is that we measure the impact of each test, and we say, if this change had been in place and we saw this 20% lift to conversion from add to cart to order, what would that dollar amount be worth if it had been in place last year? So each and every test we run has a dollar amount assigned to it, fairly conservative, cuz you would expect that if you roll out that change, it would have an impact that's positive for longer than a year. Um, but it's a good way to kind of back into it and say, okay, I really am seeing a really good, like 10 to one, 15 to one, ROI on the program itself.
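Putting a dollar value on each winning test, the way AJ describes, is simple arithmetic once you have the observed lift. A sketch with invented numbers; a real program would plug in the site's own order volume and average order value:

```python
# Hypothetical back-of-the-envelope value of a winning test:
# apply the observed lift to last year's orders and average order value.
orders_last_year = 12_000          # orders through the tested funnel step (hypothetical)
average_order_value = 80.00        # dollars (hypothetical)
observed_lift = 0.20               # 20% lift from add-to-cart to order

incremental_orders = orders_last_year * observed_lift
annual_value = incremental_orders * average_order_value
print(f"Estimated annual value of the change: ${annual_value:,.0f}")
# 12,000 * 0.20 * $80 = $192,000 -- conservative, since a rolled-out change
# usually keeps paying off for longer than a year.
```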
Kathleen (25:59):
Got it. That makes sense. So interesting. Thank you for bearing with me <laugh> as I ask all these dumb questions. Um, alright, we're gonna shift gears cuz we're coming up towards the end of our time and I wanna make sure I have time to ask you some of the questions I always ask my guests. Um, the first being that the marketers I talk to say one of their struggles is it's like drinking from a fire hose trying to keep up with all the changes in the world of digital marketing. So how do you personally stay educated and stay on top of it all?
AJ (26:31):
Yeah, I think, I think a lot of the answers would be around like reading you might do or blogs you might follow. And I do a fair amount of that, but what I find myself really going back to are some foundational things like books that are really subject specific. So I have a book on typography that I reference when I'm thinking about what are some of the problems with this page and the, the styling of it.
Kathleen (26:51):
Oh, what's the name of that book?
AJ (26:52):
Do you know, uh, Thinking with Type? It's a book from a design course I did way back when, and I still reference it. And there's also a form design book. Um, let me see if I have the name of it. There's a form design book called Web Form Design, as literal as it gets, um, by Luke W, and that's a phenomenal book if you've never thought about form design and placement of elements and things like that. So there's a lot of these books that are hyper focused. They're very thoughtful. They talk about the pros and the cons of things. So it's actually not about keeping up with the latest trends, but making sure the basic principles are there. And then for keeping up with the latest trends, honestly, for me, it's like going to meetups and just talking to people and hearing what other marketers are struggling with or what they're trying out, because it fast tracks past some of the noise and other ways of getting information, to just, what are people really experiencing? It might be the researcher in me that's really drawn to learning from people directly.
Kathleen (27:48):
AJ, I feel like you're my soul sister. Anybody who reads entire books on web forms and type is my kind of person. So <laugh>, I love it. I love those two recommendations. Thank you for sharing those. Mm-hmm <affirmative> Um, second question. This podcast is all about inbound marketing, which I define very broadly as anything that naturally attracts the right kind of customer to you. And so I'm wondering if there's a particular company or individual that you think is really setting the bar for what it means to be a great inbound marketer these days.
AJ (28:19):
Yeah. I think that there's a certain category that's drawing me in right now, which hasn't hooked me yet, but it's getting my attention really consistently. So there's been this phase of subscription boxes, where a stylist will style clothes for you, and that's okay. But the ones that really get my attention are the capsules, where it's like, all these pieces are interchangeable, and the way that they show the imagery is really captivating. Cause it's like, here are the five pieces of clothes and here's the 30 ways you can wear it. And every single time, I click on it, cause I'm like, oh, I'd love the idea of like five pieces, 10 pieces that can just work everywhere and is great for travel. That's a big pain point.
Kathleen (28:57):
I'm gonna have to now Google this cuz I have not seen these things and this sounds like something I could use in my life. Great. All right. Well thank you for sharing that. Um, if somebody's interested in learning more about Experiment Zone or connecting with you and asking a question, what's the best way for them to do that?
AJ (29:15):
I think reaching out through our website's the easiest way. So experimentzone.com. We've got a contact form, so you can reach out that way. I'm always happy to connect on LinkedIn as well. So, AJ Davis on LinkedIn.
Kathleen (29:25):
All right. Great. Well, thank you for joining me this week. If you're listening and you enjoyed this episode, I would love it if you'd head to Apple Podcasts and leave the podcast a review. That's how other folks find us. And if you know someone else doing great inbound marketing work, please tweet me at @Kathleenlbooth, because I would love to make them my next guest. In the meantime, you'll find all the links to Experiment Zone and to AJ in the show notes, which are available at kathleen-booth.com. So head there if you wanna learn more or connect with AJ. Uh, and thank you, AJ, for joining me. This was so interesting. I really appreciate you coming on the show.
AJ (30:01):
Thank you. Thanks for the great questions. It was a lot of fun.