Are you stable, or complacent? Is it time to experiment yet?
TL;DR: If you are not sure whether you should experiment with new techniques, find ways to monitor the domain first; you might be able to learn from someone else's experience.
When things are stable and going well, a hard question to answer is: "Is it time to experiment?"
I have default tools and techniques which I know I can use to quickly achieve good results. As an example, when I want to start a new web server project for myself or as a training exercise I will use Java and Spark Framework. I know how to get results with the framework, I know Java, I can achieve results quickly. Have I found my peak and ideal solution? Or am I complacent?
Perhaps there is a better solution? Perhaps I should experiment?
There are some obvious points at which I will experiment:
- when my chosen approach or solution has obvious limitations for the task I am about to undertake
- if I’m not sure that my current solution will handle the next task
With known limitations I’m forced to experiment. I have no choice.
If I’m not sure, then the first experiment I’ll engage in is with my current solution, and if it works then I probably won’t go looking for a new solution.
What about the middle ground, when things are working fine, but there might be a better way that I am as yet unaware of? Should I hunt down other solutions and experiment with them?
I used to…
- I used to monitor for new tools
- I used to try out new tools all the time to see if they were better than those I was using.
But the problem was that I would then:
- spend a lot of time monitoring tools
- spend a lot of time superficially evaluating tools,
- spend time switching between tools
I didn’t measure the time all this was taking. I didn’t consider the Opportunity Cost of this experimentation: what I could have done instead. I didn’t evaluate the benefit of having conducted the experiment, i.e. having found and switched to another tool: how much better off am I? How much faster can I now complete the task?

To experiment effectively I would need to:

- measure the time experimentation takes,
- consider the Opportunity Cost of experimentation,
- evaluate the benefit of having conducted the experiment.
I decided to cut down on the tool monitoring and evaluating and instead go deep with the tools I had, and to find a way to monitor not the tools themselves, but other people’s experiences of using tools.
This means that instead of monitoring new lists of tools I would find people in the domain that I was interested in, and monitor their experiences of using tools and techniques.
So if I was interested in new lightweight Java HTTP Servers I would:
- find blog aggregators for web development, HTTP servers, Java libraries and subscribe to those,
- create a Google Alert for search terms such as “lightweight http server java”,
- subscribe to specific blogs for products, tools and techniques that I already know about.
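As a minimal sketch of that monitoring idea: once you subscribe to feeds, you can filter incoming items for the search terms you care about. The feed snippet, function name, and URLs below are hypothetical stand-ins (a real setup would fetch live RSS/Atom feeds over HTTP), but the filtering logic is the essence of the approach.

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS snippet standing in for a real blog-aggregator feed.
SAMPLE_FEED = """<rss><channel>
  <item><title>A lightweight HTTP server in Java</title><link>https://example.com/a</link></item>
  <item><title>CSS grid tips</title><link>https://example.com/b</link></item>
</channel></rss>"""

def matching_items(feed_xml, keywords):
    """Return (title, link) pairs whose title mentions every keyword."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if all(k.lower() in title.lower() for k in keywords):
            results.append((title, item.findtext("link", "")))
    return results

print(matching_items(SAMPLE_FEED, ["http server", "java"]))
```

Pointed at your subscribed feeds and run on a schedule, a filter like this surfaces other people's write-ups about the domain without you having to evaluate every new tool yourself.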
I changed the monitoring approach and then tried to find ways of learning from others’ experience rather than from direct experience (which is generally more costly in time, though clearly you learn more when you engage in it).
You can also apply this to your current work by publishing what you do and what you think. Then you can receive comments drawing on other people’s experience.
Prior to experimenting:
- set an aim for what you are trying to learn,
- decide on the fixed parameters in your experiment e.g. what other tools you will use, what design approach, etc.
- investigate any prior work that has tried to achieve the same aims with the same parameters (this might mean you don’t have to actually conduct the experiment),
- set a time deadline,
- keep your experiment focused on what you are trying to learn,
- if you spot an opportunity for new investigations then note it down, but don’t change the terms of the experiment yet; consider those ideas objectively later
“Is it time to experiment?” might really mean: “How can I better monitor this domain?”
You can use those nagging doubts to expand your signal monitoring and ongoing learning and education.