Can you automate everything in software testing?

The short answer: No. You can automate individual test cases, but you can't automate testing itself. Trying to automate everything invariably produces gaps in areas that are not amenable to automation. Conversely, some projects suffer from a lack of automation, leaving many manual tests that need to be re-run frequently.

There is a happy medium in which automation features significantly but is not mistaken as the total solution to testing needs. 

What should be automated

As with testing in general, the areas of highest risk are usually the areas of greatest value: payment transactions, personal information, dashboard rendering, logo display, and so on. Automation is great for tests whose results you want to see for every build or check-in, without the effort and delay of manual execution every time.
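For instance, a check on payment logic is cheap to run on every build and expensive to repeat by hand. The sketch below assumes pytest as the test runner; calculate_order_total and its module are hypothetical stand-ins for real payment code.

    # test_payments.py -- a sketch of a high-value check worth running on every build.
    # calculate_order_total is a hypothetical function standing in for real payment
    # logic; here we assume the discount is applied before a 20% tax.
    import pytest

    from myshop.payments import calculate_order_total  # hypothetical module


    def test_order_total_applies_discount_then_tax():
        # A regression here affects real money, so we want this result on every check-in.
        total = calculate_order_total(subtotal=100.00, tax_rate=0.20, discount=10.00)
        assert total == pytest.approx(108.00)  # (100 - 10) * 1.20


    def test_order_total_rejects_negative_subtotal():
        # Assumes the hypothetical function signals invalid input with ValueError.
        with pytest.raises(ValueError):
            calculate_order_total(subtotal=-1.00, tax_rate=0.20, discount=0.00)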

What should NOT be automated

There are tests that you do not need to run very often, or that are not suitable for inclusion in a suite where you want fast results. For example, tests that take a long time to complete, such as a large database sync or a slow download, are often better handled on an as-needed basis. The run time of the entire automation suite should also be monitored: an overloaded suite (too many tests, or too many slow tests) undermines the value of timely feedback.
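One common way to keep such tests out of the fast suite, assuming pytest is the runner, is a project-defined marker. The marker name and test names below are hypothetical, and the "slow" marker would need to be registered in pytest.ini.

    # test_sync.py -- a sketch of keeping long-running tests out of the per-build suite.
    import pytest


    @pytest.mark.slow
    def test_full_database_sync():
        # Hypothetical long-running scenario; run on demand with:  pytest -m slow
        ...


    def test_sync_status_is_reported():
        # Stays in the fast suite selected with:  pytest -m "not slow"
        assert True  # placeholder for a quick, deterministic check

Running the fast suite with pytest -m "not slow" keeps per-build feedback quick, and pytest --durations=10 reports the slowest tests so an overloaded suite can be spotted early.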

What tests are not readily amenable to automation?

There are several types of brittle test: those whose outcome is variable, and those that fail intermittently. Some basic automation strategies can deal with these. A variable output can still be judged a pass if it matches a known pass scenario. An intermittently failing test, for example one affected by network issues, can be retried a limited number of times until a pass is achieved.
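As a minimal sketch of both strategies in plain Python (the fetch_status stub and its set of acceptable outcomes are hypothetical):

    # Tolerant matching of variable output, plus a bounded retry for intermittent
    # failures. fetch_status is a stand-in for a call to the product or its API.
    import random
    import time


    def fetch_status():
        # Stand-in for a call whose output legitimately varies between runs.
        return random.choice(["synced", "syncing"])


    def check_variable_output():
        # A variable result still counts as a pass if it matches a known pass scenario.
        assert fetch_status() in {"synced", "syncing"}


    def run_with_retries(check, attempts=3, delay_seconds=2):
        """Re-run an intermittently failing check a limited number of times."""
        last_error = None
        for attempt in range(attempts):
            try:
                check()
                return  # pass achieved
            except AssertionError as error:
                last_error = error
                time.sleep(delay_seconds)  # e.g. wait out a transient network issue
        raise last_error  # still failing after the retry budget: report it


    run_with_retries(check_variable_output)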

But do you want to handle these tests in this way? Is there a risk that, in pursuit of an automation PASS, you are covering up a product issue that merits further investigation? If the output is variable, that could be a sign of an unstable product that will not satisfy the customer's needs. If the test fails four out of every five times and always needs to be re-run to PASS, do you need to know this and do something about it? This might be acceptable for a simple function that invisibly retries on failure and still achieves the customer need, but there will be cases where you are better off not automating certain tests and instead maintaining them in a manual test suite.

So, how do you choose what to automate?

Unsurprisingly, getting the best balance comes down to automating everything that makes good sense to automate and deciding when to stop. 

Follow the testing pyramid to help determine the right balance.

There is a useful principle in the testing pyramid: it is good to have many fast, simple tests such as unit tests, but we should limit the number of more complex tests, especially end-to-end and UI tests. It is also good practice not to duplicate unit-test coverage at higher levels, which keeps run times and maintenance overhead low.
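A small sketch of the pyramid's base, again assuming pytest: logic covered directly by fast unit tests like these does not need to be re-exercised through slower UI or end-to-end tests. The validate_email function here is a hypothetical, simplified stand-in for real product code.

    # test_validation.py -- fast unit tests at the base of the pyramid.
    import re


    def validate_email(address: str) -> bool:
        # Simplified stand-in for the product's real validation logic.
        return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None


    def test_accepts_plain_address():
        assert validate_email("user@example.com")


    def test_rejects_missing_domain():
        assert not validate_email("user@")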