apify: Legacy Apifier runtime for JavaScript
This package has been moved to apify.
Legacy Apifier runtime for JavaScript
This package has been moved to apify.
All notable changes to this project will be documented in this file. See Conventional Commits for commit guidelines.
_timeoutAndRetry (#3206) (9c1cf6d), closes /github.com/apify/crawlee/pull/3188#discussion_r2410256271AdaptivePlaywrightCrawler (#3188) (9569d19)launchOptions with useIncognitoPages (#3181) (84a4b70), closes /github.com/apify/crawlee/issues/3173#issuecomment-3346728227 #3173 #3173systemInfoV2 by default (#3208) (617a343)ImpitHttpClient respects the internal Request timeout (#3103) (a35376d)proxyUrls list can contain null (#3142) (dc39cc2), closes #3136exportData calls on empty datasets (#3115) (298f170), closes #2734maxCrawlDepth with a custom enqueueLinks transformRequestFunction (#3159) (e2ecb74)collectAllKeys option for BasicCrawler.exportData (#3129) (2ddfc9c), closes #3007TandemRequestProvider for combined RequestList and RequestQueue usage (#2914) (4ca450f), closes #2499Note: Version bump only for package @crawlee/root
pre|postLaunchHooks prematurely (#3062) (681660e)exclude option in enqueueLinksByClickingElements (#3058) (013eb02)HttpCrawler (#3060) (b5fcd79), closes /github.com/apify/crawlee/blob/f68d2a95d67cc6230122dc1a5226c57ca23d0ae7/packages/browser-crawler/src/internals/browser-crawler.ts#L481-L486 #3029maxCrawlDepth crawler option (#3045) (0090df9), closes #2633onSkippedRequest for AdaptivePlaywrightCrawler.enqueueLinks (#3043) (fc23d34), closes #3026 #3039limit checking (#3038) (2774124), closes #3037Sitemap.tryCommonNames (#3015) (64a090f), closes #2884addRequests methods (#3013) (a4ab748), closes #2980AdaptivePlaywrightCrawler (#2987) (76431ba), closes #2899KVS.listKeys() prefix and collection parameters (#3001) (5c4726d), closes #2974ImpitHttpClient (#2991) (120f0a7)PlaywrightGotoOptions won't result in unknown when playwright is not installed (#2995) (93eba38), closes #2994body from iframe elements (#2986) (c36166e), closes #2979MinimumSpeedStream and ByteCounterStream helpers (#2970) (921c4ee)systemInfoV2 in snapshotter (#2961) (4100eab), closes #2958KVS.setRecord calls (#2962) (d31d90e)_createPageForBrowser in browser pool (#2950) (27ba74b), closes #2789[@apilink](https://github.com/apilink) to [@link](https://github.com/link) on build (#2949) (abe1dee), closes #2717autoscaledPoolOptions.isTaskReadyFunction option (#2948) (fe2d206), closes #2922BrowserCrawler (#2908) (3107e55), closes #2851RobotsFile to RobotsTxtFile (#2913) (3160f71), closes #2910406 as other 4xx status codes in HttpCrawler (#2907) (b0e6f6d), closes #2892context.body (#2838) (32d6d0e), closes #2401camoufox template correctly (#2864) (a9d008c), closes #2863handleCloudflareChallenge helper (#2865) (9a1725f)impit streaming (#2833) (af2fe23), closes #2756CrawlerRunOptions before passing them to addRequests (#2803) (02a598c), closes #2802BasicCrawler tidy-up on CriticalError (#2817) (53331e8), closes #2807impit-based HttpClient implementation (#2787) (61d7ffa)BasicCrawler.stop() (#2792) (af2966f), closes #2777.trim() urls from pretty-printed sitemap.xml files (#2709) (802a6fe), closes #2698fingerprintGeneratorOptions types (#2705) (fcb098d), closes #2703forefront request fetching in RQv2 (#2689) (03951bd), closes #2669prolong- and deleteRequestLock forefront option (#2690) (cba8da3), closes #2681 #2689 #2669.isFinished() before RequestList reads (#2695) (6fa170f)UInt8Array in KVS.setValue() (#2682) (8ef0e60)errorHandler for session errors (#2683) (7d72bcb), closes #2678username and password (#2696) (0f0fcc5)ignoreHTTPSErrors to acceptInsecureCerts to support v23 (#2684) (f3927e6)forefront option in MemoryStorage's RequestQueue (#2681) (b0527f9), closes #2669SitemapRequestList.teardown() doesn't break persistState calls (#2673) (fb2c5cd), closes /github.com/apify/crawlee/blob/f3eb99d9fa9a7aa0ec1dcb9773e666a9ac14fb76/packages/core/src/storages/sitemap_request_list.ts#L446 #2672FACEBOOK_REGEX to match older style page URLs (#2650) (a005e69), closes #2216inProgress cache, rely solely on locked states (#2601) (57fcb08)globs & regexps for SitemapRequestList (#2631) (b5fd3a9)iframe expansion to parseWithCheerio in browsers (#2542) (328d085), closes #2507ignoreIframes opt-out from the Cheerio iframe expansion (#2562) (474a8dc)@crawlee/browser package (#2532) (3357c7f)useState in adaptive crawler (#2530) (7e195c1)context.request.loadedUrl and id as required inside the request handler (#2531) (2b54660)waitForAllRequestsToBeAdded option to enqueueLinks helper (925546b), closes #2318useState implementation into crawling context (eec4a71)crawler.log publicly accessible (#2526) (3e9e665)launchOptions on type level (0519d40), closes #1849crawler.log when creating child logger for Statistics (0a0d75d), closes #2412requestHandler is provided in AdaptiveCrawler (#2518) (31083aa)waitForSelector context helper + parseWithCheerio in adaptive crawler (#2522) (6f88e73)URL_NO_COMMAS_REGEX regexp to allow single character hostnames (#2492) (ec802e8), closes #2487EnqueueStrategy.All erroring with links using unsupported protocols (#2389) (8db3908)SystemInfo events every second (#2454) (1fa9a66)content-type check breaks on content-type parameters (#2442) (db7d372)FileDownload "crawler" (#2435) (d73756b)RequestQueue v2 the default queue, see more on Apify blog (#2390) (41ae8ab), closes #2388RequestList memory footprint (#2466) (12210bd)crawler.addRequests() (#2456) (6da86a8)AutoscaledPool.notify (#2422) (6f2e6b0), closes #2421notify in addRequests() (#2425) (c4d5446), closes #2421setValue (#2411) (9089bf1)networkidle to waitUntil in gotoExtended (#2399) (5d0030d), closes #2398application/xml (#2408) (cbcf47a)createAdaptivePlaywrightRouter utility (#2415) (cee4778), closes #2407tieredProxyUrls for ProxyConfiguration (#2348) (5408c7f)newUrlFunction for ProxyConfiguration (#2392) (330598b), closes #2348 #2065parseWithCheerio helper (#2396) (a05b3a9)RequestQueueV2 (#2376) (ffba095)createRequests works correctly with exclude (and nothing else) (#2321) (048db09)csv-stringify and fs-extra (#2326) (718959d), closes /github.com/redabacha/crawlee/blob/2f05ed22b203f688095300400bb0e6d03a03283c/.eslintrc.json#L50page.waitForTimeout() with sleep() (52d7219), closes #2335puppeteer@v22 (#2337) (3cc360a)KeyValueStore.recordExists() (#2339) (8507a65)userAgent parameter to RobotsFile.isAllowed() + RobotsFile.from() helper (#2338) (343c159)retryOnBlocked doesn't override the blocked HTTP codes (#2243) (81672c3)--no-purge (#2244) (83f3179)retryOnBlocked (#2252) (e19a773), closes #2249got-scraping (012fc9e)skipNavigation option to enqueueLinks (#2153) (118515d)--no-sandbox flag for webkit launcher (#2148) (1eb2f08), closes #1797RequestList.open() + improve docs (#2158) (c5a1b07)extractUrls to split the text line by line first (#2122) (7265cd7)purgeDefaultStorage to wipe the storage on each call (#2060) (4831f07)inProgress cache when delaying requests via sameDomainDelaySecs (#2045) (f63ccc0)RequestQueue instance (845141d), closes #2043Request constructor options typesafe (#2034) (75e7d65)@crawlee/* packages versions in crawlee metapackage (#2040) (61f91c7), closes /github.com/apify/crawlee/pull/2002#issuecomment-1680091061DELETE requests in HttpCrawler (#2039) (7ea5c41), closes #1658Request.maxRetries to the RequestOptions interface (#2024) (6433821)vitest (#2004) (d2e098c), closes #1999requestsFromUrl) to the queue in batches (418fbf8), closes #1995enqueueLinks explicitly provided via urls option (#2014) (cbd9d08), closes #2005closeCookieModals context helper for Playwright and Puppeteer (#1927) (98d93bb)sameDomainDelay (#2003) (e796883), closes #1993RequestQueue.addBatchedRequests() in enqueueLinks helper (4d61ca9), closes #1995internalTimeoutMillis in addition to requestHandlerTimeoutMillis (#1981) (8122622), closes #1766RequestQueue.addRequestsBatched() that is non-blocking (#1996) (c85485d), closes #1995IncomingMessage with PlainResponse for context's response (#1973) (2a1cc7f), closes #1964<base> when enqueuing (#1936) (aeef572)requestsFromUrl to RequestQueue (#1917) (7f2557c)Request.maxRetries to allow overriding the maxRequestRetries (#1925) (c5592db)RequestQueue (#1899) (063dcd1)SessionPool (#1881) (db069df)parseWithCheerio helper to HttpCrawler (#1906) (ff5f76f)runScripts are enabled (806de31)enqueueLinks in http crawlers when parsing fails (fd35270)parseWithCheerio context helper to cheerio crawler (b336a73)parseWithCheerio context helper (c8f0796)proxyUrl to DownloadListOfUrlsOptions (779be1e), closes #1780enqueueLinks in browser crawlers (#1803) (5ac336c)forefront (#1816) (b68e86a), closes #1787setStatusMessage (#1790) (c318980)exclude option to enqueueLinks (#1786) (2e833dc), closes #1785QueueOperationInfo export to the core package (5ec6c24)userData option in enqueueLinksByClickingElements (#1749) (736f85d), closes #1617request.userData when creating new request object (#1728) (222ef59), closes #1725pendingRequestCount in request queue (#1765) (946535f), closes /github.com/apify/crawlee/blob/master/packages/memory-storage/src/resource-clients/request-queue.ts#L291-L298tslib (27e96c8), closes #1747ow (bf0e03c), closes #1716forefront option to all enqueueLinks variants (#1760) (a01459d), closes #1483utils.playwright.blockRequests warning message (#1632) (76549eb)playwright is not installed (#1637) (de9db0c)set and useStorageClient shortcuts to Configuration (2e66fa2)KeyValueStore.getValue with defaultValue (#1541) (e3cb509)label in enqueueLinksByClickingElements options (#1525) (18b7c25)request.noRetry after errorHandler (#1542) (2a2040e)this instead of the class (#1596) (2b14eb7)Cookie from crawlee metapackage (7b02ceb)Dataset.exportToValue (#1553) (acc6344)Dataset.getData() shortcut (522ed6e)utils.downloadListOfUrls to crawlee metapackage (7b33b0a)utils.parseOpenGraph() (#1555) (059f85e)utils.playwright.compileScript (#1559) (2e14162)utils.playwright.infiniteScroll (#1543) (60c8289), closes #1528utils.playwright.saveSnapshot (#1544) (a4ceef0)useState helper (#1551) (2b03177)Dataset.exportToValue (#1564) (a7c17d4)forefront option to enqueueLinks helper (f8755b6), closes #1595INPUT.json to support comments (#1538) (09133ff)headless option in browser crawlers (#1455)CheerioCrawlerOptions type more loose (d871d8c)utils.playwright.blockRequests() (#1447)/INPUT.json files for KeyValueStore.getInput() (#1453)RetryRequestError + add error to the context for BC (#1443)keepAlive to crawler options (#1452)UserData type argument to CheerioCrawlingContext and related interfaces (1424)desiredConcurrency to the value of maxConcurrency (bcb689d)crawler.run() (9d62d56)CheerioCrawler (07b7e69)ow in @crawlee/cheerio package (be59f99)crawlee@^3.0.0 in the CLI templates (6426f22)desiredConcurrency: 10 as the default for CheerioCrawler (1428)Router via use method (1431)JSONData generic type arg from CheerioCrawler in (#1402)storage in (#1403)FailedRequestHandler to ErrorHandler in (#1410)CheerioCrawler in (#1411)headless option to BrowserCrawlerOptions in (#1412)enqueueLinks in browser crawler on page without any links in (385ca27)This section summarizes most of the breaking changes between Crawlee (v3) and Apify SDK (v2). Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.
Up until version 3 of apify, the package contained both scraping related tools and Apify platform related helper methods. With v3 we are splitting the whole project into two main parts:
crawlee package on NPMapify package on NPMMoreover, the Crawlee library is published as several packages under @crawlee namespace:
@crawlee/core: the base for all the crawler implementations, also contains things like Request, RequestQueue, RequestList or Dataset classes@crawlee/basic: exports BasicCrawler@crawlee/cheerio: exports CheerioCrawler@crawlee/browser: exports BrowserCrawler (which is used for creating @crawlee/playwright and @crawlee/puppeteer)@crawlee/playwright: exports PlaywrightCrawler@crawlee/puppeteer: exports PuppeteerCrawler@crawlee/memory-storage: @apify/storage-local alternative@crawlee/browser-pool: previously browser-pool package@crawlee/utils: utility methods@crawlee/types: holds TS interfaces mainly about the StorageClientAs Crawlee is not yet released as
latest, we need to install from thenextdistribution tag!
Most of the Crawlee packages are extending and reexporting each other, so it's enough to install just the one you plan on using, e.g. @crawlee/playwright if you plan on using playwright - it already contains everything from the @crawlee/browser package, which includes everything from @crawlee/basic, which includes everything from @crawlee/core.
npm install crawlee@nextOr if all we need is cheerio support, we can install only @crawlee/cheerio
npm install @crawlee/cheerio@nextWhen using playwright or puppeteer, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.
npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwrightAlternatively we can also use the crawlee meta-package which contains (re-exports) most of the @crawlee/* packages, and therefore contains all the crawler classes.
Sometimes you might want to use some utility methods from
@crawlee/utils, so you might want to install that as well. This package contains some utilities that were previously available underApify.utils. Browser related utilities can be also found in the crawler packages (e.g.@crawlee/playwright).
Both Crawlee and Apify SDK are full TypeScript rewrite, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from @apify/tsconfig package. Don't forget to set the module and target to ES2022 or above to be able to use top level await.
The
@apify/tsconfigconfig hasnoImplicitAnyenabled, you might want to disable it during the initial development as it will cause build failures if you left some unused local variables in your code.
`json title="tsconfig.json"
{
"extends": "@apify/tsconfig",
"compilerOptions": {
"module": "ES2022",
"target": "ES2022",
"outDir": "dist",
"lib": ["DOM"]
},
"include": [
"./src/*/"
]
}
#### Docker build
For `Dockerfile` we recommend using multi-stage build, so you don't install the dev dependencies like TypeScript in your final image:
```dockerfile title="Dockerfile"
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder
# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
&& npm run build
# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json
# install only prod deps
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --only=prod --no-optional --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version
# run compiled code
CMD npm run start:prodPreviously we had a magical stealth option in the puppeteer crawler that enabled several tricks aiming to mimic the real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.
In case we don't want to have dynamic fingerprints, we can disable this behaviour via useFingerprints in browserPoolOptions:
const crawler = new PlaywrightCrawler({
browserPoolOptions: {
useFingerprints: false,
},
});Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call session.getPuppeteerCookies() or session.setPuppeteerCookies(). Since this method could be used for any of our crawlers, not just PuppeteerCrawler, the methods have been renamed to session.getCookies() and session.setCookies() respectively. Otherwise, their usage is exactly the same!
When we store some data or intermediate state (like the one RequestQueue holds), we now use @crawlee/memory-storage by default. It is an alternative to the @apify/storage-local, that stores the state inside memory (as opposed to SQLite database used by @apify/storage-local). While the state is stored in memory, it also dumps it to the file system, so we can observe it, as well as respects the existing data stored in KeyValueStore (e.g. the INPUT.json file).
When we want to run the crawler on Apify platform, we need to use Actor.init or Actor.main, which will automatically switch the storage client to ApifyClient when on the Apify platform.
We can still use the @apify/storage-local, to do it, first install it pass it to the Actor.init or Actor.main options:
@apify/storage-localv2.1.0+ is required for Crawlee
import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';
const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });Previously the state was preserved between local runs, and we had to use --purge argument of the apify-cli. With Crawlee, this is now the default behaviour, we purge the storage automatically on Actor.init/main call. We can opt out of it via purge: false in the Actor.init options.
Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.
handleRequestFunction -> requestHandlerhandlePageFunction -> requestHandlerhandleRequestTimeoutSecs -> requestHandlerTimeoutSecshandlePageTimeoutSecs -> requestHandlerTimeoutSecsrequestTimeoutSecs -> navigationTimeoutSecshandleFailedRequestFunction -> failedRequestHandlerWe also renamed the crawling context interfaces, so they follow the same convention and are more meaningful:
CheerioHandlePageInputs -> CheerioCrawlingContextPlaywrightHandlePageFunction -> PlaywrightCrawlingContextPuppeteerHandlePageFunction -> PuppeteerCrawlingContextSome utilities previously available under Apify.utils namespace are now moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current Request instance or current Page object, or the RequestQueue bound to the crawler.
One common helper that received more attention is the enqueueLinks. As mentioned above, it is context aware - we no longer need pass in the requestQueue or page arguments (or the cheerio handle $). In addition to that, it now offers 3 enqueuing strategies:
EnqueueStrategy.All ('all'): Matches any URLs foundEnqueueStrategy.SameHostname ('same-hostname') Matches any URLs that have the same subdomain as the base URL (default)EnqueueStrategy.SameDomain ('same-domain') Matches any URLs that have the same domain name. For example, https://wow.an.example.com and https://example.com will both be matched for a base url of https://example.com.This means we can even call enqueueLinks() without any parameters. By default, it will go through all the links found on current page and filter only those targeting the same subdomain.
Moreover, we can specify patterns the URL should match via globs:
const crawler = new PlaywrightCrawler({
async requestHandler({ enqueueLinks }) {
await enqueueLinks({
globs: ['https://apify.com/*/*'],
// we can also use `regexps` and `pseudoUrls` keys here
});
},
});RequestQueue instanceAll crawlers now have the RequestQueue instance automatically available via crawler.getRequestQueue() method. It will create the instance for you if it does not exist yet. This mean we no longer need to create the RequestQueue instance manually, and we can just use crawler.addRequests() method described underneath.
We can still create the
RequestQueueexplicitly, thecrawler.getRequestQueue()method will respect that and return the instance provided via crawler options.
crawler.addRequests()We can now add multiple requests in batches. The newly added addRequests method will handle everything for us. It enqueues the first 1000 requests and resolves, while continuing with the rest in the background, again in a smaller 1000 items batches, so we don't fall into any API rate limits. This means the crawling will start almost immediately (within few seconds at most), something previously possible only with a combination of RequestQueue and RequestList.
// will resolve right after the initial batch of 1000 requests is added
const result = await crawler.addRequests([/* many requests, can be even millions */]);
// if we want to wait for all the requests to be added, we can await the `waitForAllRequestsToBeAdded` promise
await result.waitForAllRequestsToBeAdded;Previously an error thrown from inside request handler resulted in full error object being logged. With Crawlee, we log only the error message as a warning as long as we know the request will be retried. If you want to enable verbose logging like in v2, use the CRAWLEE_VERBOSE_LOG env var.
requestAsBrowserIn v1 we replaced the underlying implementation of requestAsBrowser to be just a proxy over calling got-scraping - our custom extension to got that tries to mimic the real browsers as much as possible. With v3, we are removing the requestAsBrowser, encouraging the use of got-scraping directly.
For easier migration, we also added context.sendRequest() helper that allows processing the context bound Request object through got-scraping:
const crawler = new BasicCrawler({
async requestHandler({ sendRequest, log }) {
// we can use the options parameter to override gotScraping options
const res = await sendRequest({ responseType: 'json' });
log.info('received body', res.body);
},
});sendRequest()?The useInsecureHttpParser option has been removed. It's permanently set to true in order to better mimic browsers' behavior.
Got Scraping automatically performs protocol negotiation, hence we removed the useHttp2 option. It's set to true - 100% of browsers nowadays are capable of HTTP/2 requests. Oh, more and more of the web is using it too!
In the requestAsBrowser approach, some of the options were named differently. Here's a list of renamed options:
payloadThis options represents the body to send. It could be a string or a Buffer. However, there is no payload option anymore. You need to use body instead. Or, if you wish to send JSON, json. Here's an example:
// Before:
await Apify.utils.requestAsBrowser({ …, payload: 'Hello, world!' });
await Apify.utils.requestAsBrowser({ …, payload: Buffer.from('c0ffe', 'hex') });
await Apify.utils.requestAsBrowser({ …, json: { hello: 'world' } });
// After:
await gotScraping({ …, body: 'Hello, world!' });
await gotScraping({ …, body: Buffer.from('c0ffe', 'hex') });
await gotScraping({ …, json: { hello: 'world' } });ignoreSslErrorsIt has been renamed to https.rejectUnauthorized. By default, it's set to false for convenience. However, if you want to make sure the connection is secure, you can do the following:
// Before:
await Apify.utils.requestAsBrowser({ …, ignoreSslErrors: false });
// After:
await gotScraping({ …, https: { rejectUnauthorized: true } });Please note: the meanings are opposite! So we needed to invert the values as well.
header-generator optionsuseMobileVersion, languageCode and countryCode no longer exist. Instead, you need to use headerGeneratorOptions directly:
// Before:
await Apify.utils.requestAsBrowser({
…,
useMobileVersion: true,
languageCode: 'en',
countryCode: 'US',
});
// After:
await gotScraping({
…,
headerGeneratorOptions: {
devices: ['mobile'], // or ['desktop']
locales: ['en-US'],
},
});timeoutSecsIn order to set a timeout, use timeout.request (which is milliseconds now).
// Before:
await Apify.utils.requestAsBrowser({
…,
timeoutSecs: 30,
});
// After:
await gotScraping({
…,
timeout: {
request: 30 * 1000,
},
});throwOnHttpErrorsthrowOnHttpErrors → throwHttpErrors. This options throws on unsuccessful HTTP status codes, for example 404. By default, it's set to false.
decodeBodydecodeBody → decompress. This options decompresses the body. Defaults to true - please do not change this or websites will break (unless you know what you're doing!).
abortFunctionThis function used to make the promise throw on specific responses, if it returned true. However, it wasn't that useful.
You probably want to cancel the request instead, which you can do in the following way:
const promise = gotScraping(…);
promise.on('request', request => {
// Please note this is not a Got Request instance, but a ClientRequest one.
// https://nodejs.org/api/http.html#class-httpclientrequest
if (request.protocol !== 'https:') {
// Unsecure request, abort.
promise.cancel();
// If you set `isStream` to `true`, please use `stream.destroy()` instead.
}
});
const response = await promise;Previously, you were able to have a browser pool that would mix Puppeteer and Playwright plugins (or even your own custom plugins if you've built any). As of this version, that is no longer allowed, and creating such a browser pool will cause an error to be thrown (it's expected that all plugins that will be used are of the same type).
One small feature worth mentioning is the ability to handle requests with browser crawlers outside the browser. To do that, we can use a combination of Request.skipNavigation and context.sendRequest().
Take a look at how to achieve this by checking out the Skipping navigation for certain requests example!
Crawlee exports the default log instance directly as a named export. We also have a scoped log instance provided in the crawling context - this one will log messages prefixed with the crawler name and should be preferred for logging inside the request handler.
const crawler = new CheerioCrawler({
async requestHandler({ log, request }) {
log.info(`Opened ${request.loadedUrl}`);
},
});Every crawler instance now has useState() method that will return a state object we can use. It will be automatically saved when persistState event occurs. The value is cached, so we can freely call this method multiple times and get the exact same reference. No need to worry about saving the value either, as it will happen automatically.
const crawler = new CheerioCrawler({
async requestHandler({ crawler }) {
const state = await crawler.useState({ foo: [] as number[] });
// just change the value, no need to care about saving it
state.foo.push(123);
},
});The Apify platform helpers can be now found in the Apify SDK (apify NPM package). It exports the Actor class that offers following static helpers:
ApifyClient shortcuts: addWebhook(), call(), callTask(), metamorph()init(), exit(), fail(), main(), isAtHome(), createProxyConfiguration()getInput(), getValue(), openDataset(), openKeyValueStore(), openRequestQueue(), pushData(), setValue()on(), off()getEnv(), newClient(), reboot()Actor.main is now just a syntax sugar around calling Actor.init() at the beginning and Actor.exit() at the end (plus wrapping the user function in try/catch block). All those methods are async and should be awaited - with node 16 we can use the top level await for that. In other words, following is equivalent:
import { Actor } from 'apify';
await Actor.init();
// your code
await Actor.exit('Crawling finished!');import { Actor } from 'apify';
await Actor.main(async () => {
// your code
}, { statusMessage: 'Crawling finished!' });Actor.init() will conditionally set the storage implementation of Crawlee to the ApifyClient when running on the Apify platform, or keep the default (memory storage) implementation otherwise. It will also subscribe to the websocket events (or mimic them locally). Actor.exit() will handle the tear down and calls process.exit() to ensure our process won't hang indefinitely for some reason.
Apify SDK (v2) exports Apify.events, which is an EventEmitter instance. With Crawlee, the events are managed by EventManager class instead. We can either access it via Actor.eventManager getter, or use Actor.on and Actor.off shortcuts instead.
-Apify.events.on(...);
+Actor.on(...);We can also get the
EventManagerinstance viaConfiguration.getEventManager().
In addition to the existing events, we now have an exit event fired when calling Actor.exit() (which is called at the end of Actor.main()). This event allows you to gracefully shut down any resources when Actor.exit is called.
Apify.call() is now just a shortcut for running ApifyClient.actor(actorId).call(input, options), while also taking the token inside env vars into accountApify.callTask() is now just a shortcut for running ApifyClient.task(taskId).call(input, options), while also taking the token inside env vars into accountApify.metamorph() is now just a shortcut for running ApifyClient.task(taskId).metamorph(input, options), while also taking the ACTOR_RUN_ID inside env vars into accountApify.waitForRunToFinish() has been removed, use ApifyClient.waitForFinish() insteadActor.main/init purges the storage by defaultpurgeLocalStorage helper, move purging to the storage class directlyStorageClient interface now has optional purge methodActor.init() (you can opt out via purge: false in the options of init/main methods)QueueOperationInfo.request is no longer availableRequest.handledAt is now string date in ISO formatRequest.inProgress and Request.reclaimed are now Sets instead of POJOsinjectUnderscore from puppeteer utils has been removedAPIFY_MEMORY_MBYTES is no longer taken into account, use CRAWLEE_AVAILABLE_MEMORY_RATIO insteadAutoscaledPool options are no longer available:cpuSnapshotIntervalSecs and memorySnapshotIntervalSecs has been replaced with top level systemInfoIntervalMillis configurationmaxUsedCpuRatio has been moved to the top level configurationProxyConfiguration.newUrlFunction can be async. .newUrl() and .newProxyInfo() now return promises.prepareRequestFunction and postResponseFunction options are removed, use navigation hooks insteadgotoFunction and gotoTimeoutSecs are removedRequest propsfingerprintsOptions renamed to fingerprintOptions (fingerprints -> fingerprint).fingerprintOptions now accept useFingerprintCache and fingerprintCacheSize (instead of useFingerprintPerProxyCache and fingerprintPerProxyCacheSize, which are now no longer available). This is because the cached fingerprints are no longer connected to proxy URLs but to sessions.utils.apifyClient early instantiation (#1330)utils.playwright.injectJQuery() (#1337)keyValueStore option to Statistics class (#1345)RequestList (#1347)This should help with the
We either navigate top level or have old version of the navigated framebug in puppeteer.
RequestTransform's return typeenqueueLinksByClickingElements (#1295)playwright to v1.20.2puppeteer to v13.5.2We noticed that with this version of puppeteer actor run could crash with
We either navigate top level or have old version of the navigated frameerror (puppeteer issue here). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (headless: false), we recommend pinning puppeteer version to10.4.0in actorpackage.jsonfile.
RequestQueue state after 5 minutes of inactivity, closes #997This release should resolve the 0 concurrency bug by automatically resetting the
internal RequestQueue state after 5 minutes of inactivity.
We now track last activity done on a RequestQueue instance:
inProgress cache)If we don't detect one of those actions in last 5 minutes, and we have some
requests in the inProgress cache, we try to reset the state. We can override
this limit via CRAWLEE_INTERNAL_TIMEOUT env var.
This should finally resolve the 0 concurrency bug, as it was always about
stuck requests in the inProgress cache.
request.headers is setRequestQueue API timeout to 30 secondstryCancel() from inside sync callback (#1265)body is not available in infiniteScroll() from Puppeteer utils (#1238)utils.log instance (#1278)Up until now, browser crawlers used the same session (and therefore the same proxy) for
all request from a single browser * now get a new proxy for each session. This means
that with incognito pages, each page will get a new proxy, aligning the behaviour with
CheerioCrawler.
This feature is not enabled by default. To use it, we need to enable useIncognitoPages
flag under launchContext:
new Apify.Playwright({
launchContext: {
useIncognitoPages: true,
},
// ...
})Note that currently there is a performance overhead for using
useIncognitoPages. Use this flag at your own will.
We are planning to enable this feature by default in SDK v3.0.
Previously when a page function timed out, the task still kept running. This could lead to requests being processed multiple times. In v2.2 we now have abortable timeouts that will cancel the task as early as possible.
Several new timeouts were added to the task function, which should help mitigate the zero concurrency bug. Namely fetching of next request information and reclaiming failed requests back to the queue
are now executed with a timeout with 3 additional retries before the task fails. The timeout is always at least 300s (5 minutes), or requestHandlerTimeoutSecs if that value is higher.
RequestError: URI malformed in cheerio crawler (#1205)diffCookie (#1217)runTaskFunction() (#1250)window.scrollBy (#1170)YOUTUBE_REGEX_STRING being too greedy (#1171)purgeLocalStorage utility method (#1187)fixUrl function (#1184)requestsFromUrl to RequestListOptions in TS (#1191)forceCloud down to the KV store (#1186), closes #752ApifyClient v2 (full rewrite to TS)APIFY_LOCAL_STORAGE_ENABLE_WAL_MODE), closes #956@ts-ignore comments to imports of optional peer dependencies (#1152)sdk.openSessionPool() (#1154)infiniteScroll (#1140)ProxyConfiguration and CheerioCrawler.got-scraping to receive multiple improvements.sessionToken with got-scrapingforceUrlEncoding in requestAsBrowser because we found out that recent versions of the underlying HTTP client got already encode URLs
and forceUrlEncoding could lead to weird behavior. We think of this as fixing a bug, so we're not bumping the major version.handleRequestTimeoutMillis to max valid value to prevent Node.js fallback to 1.got-scraping@^3.0.1handleRequestTimeoutMillis to max valid valueCheerioCrawler caused by parser conflicts in recent versions of cheerio.got-scraping 2.0.1 until fully compatible.cheerio to 1.0.0-rc.10 from rc.3. There were breaking changes in cheerio between the versions so this bump might be breaking for you as well.LiveViewServer which was deprecated before release of SDK v1.