The Testcontainers’ MongoDB Module and Spring Data MongoDB in Action
1. Introduction
How can I easily test my MongoDB multi-document transaction code without setting up MongoDB on my device? One might argue that there is no way around setting it up, because such a transaction needs a session, which in turn requires a replica set. Thankfully, there is no need to create a 3-node replica set: we can run these transactions against a single database instance.
To achieve this, we may do the following:
- Run a MongoDB container of version 4 or higher and pass it the --replSet option;
- Initialize a single-node replica set by executing the proper command;
- Wait for the initialization to complete;
- Connect without specifying a replica set name so that we do not have to modify the OS hosts file.
It is worth mentioning that a replica set is not the only option here because MongoDB version 4.2 introduces distributed transactions in sharded clusters, which is beyond the scope of this article.
There are a lot of ways to initialize a replica set, including Docker Compose, bash scripts, services in a CI/CD pipeline, etc. However, all of them take extra work in terms of scripting, handling random ports, and wiring it into the CI/CD process. Fortunately, starting from Testcontainers version 1.14.2 we can delegate all the heavy lifting to the MongoDB Module.
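To illustrate what the module automates for us, here is a rough, purely hypothetical sketch of the manual steps listed above written with a plain Testcontainers GenericContainer; the replica set name docker-rs is an assumption, and the real MongoDBContainer hides all of this:
import org.testcontainers.containers.GenericContainer;

public class ManualReplicaSetSketch {
    public static void main(String[] args) throws Exception {
        // Run a MongoDB 4.x container with the --replSet option.
        try (GenericContainer<?> mongo = new GenericContainer<>("mongo:4.2.8")
                .withExposedPorts(27017)
                .withCommand("--replSet", "docker-rs")) {
            mongo.start();
            // Initialize a single-node replica set inside the container.
            mongo.execInContainer("mongo", "--eval", "rs.initiate();");
            // A real implementation would now poll rs.status() until the node
            // reports itself as PRIMARY before opening any connections.
            System.out.printf("Connect to the standalone at mongodb://%s:%d%n",
                    mongo.getContainerIpAddress(), mongo.getMappedPort(27017));
        }
    }
}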
Let us try it out on a small warehouse management system based on Spring Boot 2.3. In the recent past one had to use ReactiveMongoOperations and its inTransaction method, but since Spring Data MongoDB 2.2 M4 we have been able to leverage the good old @Transactional annotation or the more advanced TransactionalOperator.
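As a side note, here is a minimal sketch of how such a TransactionalOperator is typically wired up in a reactive Spring Data MongoDB application; the bean and class names are assumptions, and the actual configuration in the project may differ:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.ReactiveMongoDatabaseFactory;
import org.springframework.data.mongodb.ReactiveMongoTransactionManager;
import org.springframework.transaction.reactive.TransactionalOperator;

@Configuration
public class TransactionConfig {

    // Registers a reactive transaction manager bound to the MongoDB database factory,
    // which is what makes multi-document transactions possible on the Spring side.
    @Bean
    public ReactiveMongoTransactionManager reactiveTransactionManager(
            final ReactiveMongoDatabaseFactory factory) {
        return new ReactiveMongoTransactionManager(factory);
    }

    // The TransactionalOperator used later in incrementProductQuantity.
    @Bean
    public TransactionalOperator transactionalOperator(
            final ReactiveMongoTransactionManager transactionManager) {
        return TransactionalOperator.create(transactionManager);
    }
}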
Our application should expose a REST API that reports successfully processed files, including the number of documents modified. Files that cause errors along the way should be skipped so that the remaining files are still processed.
It may be noted that even though duplicated article/size pairs within a single file are a rare case, the possibility is quite realistic and therefore should be handled as well.
As per the business requirements to our system, we already have some products in our database, and we upload a bunch of Excel (xlsx) files to update some fields of the matched documents in our storage. Data is expected to be only on the first sheet of each workbook. Each file is processed in a separate multi-document transaction to prevent simultaneous modifications of the same documents. For example, Figure 1 shows the collision cases and how a transaction may end up, apart from the scenario in which the transactions are executed sequentially (the JSON representation is shortened here for the sake of simplicity). Transactional behavior helps us avoid data clashes and guarantees consistency.
Figure 1 Transaction sequence diagram: collision cases
As for the product collection, the article field has a unique index. At the same time, each article is bound to a concrete size. Therefore, it is important for our application to verify that both of them are present in the database before updating. Figure 2 gives an insight into this collection.
Figure 2 Product collection details
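Since Figure 2 is not reproduced here, below is a hypothetical reconstruction of the Product document based on the fields used later in the article (article, size, quantity, lastModifiedBy); the real class in the project may differ:
import java.math.BigInteger;
import org.bson.types.ObjectId;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

// Assumed shape of a product document; a compound index on the size and article
// fields is also mentioned later in the article.
@Document(collection = "product")
public class Product {
    @Id
    private ObjectId id;

    @Indexed(unique = true)        // the article serves as a unique index
    private Long article;

    private Size size;             // each article is bound to a concrete size
    private BigInteger quantity;
    private String lastModifiedBy;

    public enum Size { S, M, L }   // hypothetical placeholder for the real size enum
}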
2. Business logic implementation
Let us elaborate on the major points of the above-mentioned business logic and start with ProductController as the entry point for the processing. You can find the complete project on GitHub. The prerequisites are Java 8+ and Docker.
@PatchMapping(
consumes = MediaType.MULTIPART_FORM_DATA_VALUE,
produces = MediaType.APPLICATION_STREAM_JSON_VALUE
)
public ResponseEntity<Flux<FileUploadDto>> patchProductQuantity(
@RequestPart("file") Flux<FilePart> files,
@AuthenticationPrincipal Principal principal
) {
log.debug("shouldPatchProductQuantity");
return ResponseEntity.accepted().body(
uploadProductService.patchProductQuantity(files, principal.getName())
);
}
1) Wrap the response in a ResponseEntity and return a flux of FileUploadDto;
2) Get the current authentication principal, which comes in handy later on;
3) Pass the flux of FilePart on for processing.
Here is the patchProductQuantity method of the UploadProductServiceImpl:
public Flux<FileUploadDto> patchProductQuantity(
final Flux<FilePart> files,
final String userName
) {
return Mono.fromRunnable(() -> initRootDirectory(userName))
.publishOn(Schedulers.newBoundedElastic(1, 1, "initRootDirectory"))
.log(String.format("cleaning-up directory: %s", userName))
.thenMany(files.flatMap(f ->
saveFileToDiskAndUpdate(f, userName)
.subscribeOn(Schedulers.boundedElastic())
)
);
}
1) Use the name of the user as the root directory name;
2) Do the blocking initialization of the root directory on a separate elastic thread;
3) For each Excel file:
3.1) Save it to disk;
3.2) Then update the quantity of the products on a separate elastic thread, since the blocking processing of the file runs there.
The saveFileToDiskAndUpdate method implements the following logic:
private Mono<FileUploadDto> saveFileToDiskAndUpdate(
final FilePart file,
final String userName
) {
final String fileName = file.filename();
final Path path = Paths.get(pathToStorage, userName, fileName);
return Mono.just(path)
.log(String.format("A file: %s has been uploaded", fileName))
.flatMap(file::transferTo)
.log(String.format("A file: %s has been saved", fileName))
.then(processExcelFile(fileName, userName, path));
}
- Copy the content of the file to the user’s directory;
- After the copy stage is completed, call the processExcelFile method.
At this point, we are going to divide the logic according to the size of the file:
private Mono<FileUploadDto> processExcelFile(
final String fileName,
final String userName,
final Path path
) {
return Mono.fromCallable(() -> Files.size(path))
.flatMap(size -> {
if (size >= bigFileSizeThreshold) {
return processBigExcelFile(fileName, userName);
} else {
return processSmallExcelFile(fileName, userName);
}
});
}
- Wrap the blocking Files.size(path) call in Mono.fromCallable;
- bigFileSizeThreshold is injected from the corresponding application.yml file via @Value("${upload-file.bigFileSizeThreshold}") (see the sketch below).
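The assumed field declaration in UploadProductServiceImpl would look roughly like this (the property name comes from the article; the type and the sample value are assumptions):
// Threshold (in bytes) above which a file is treated as "big",
// e.g. upload-file.bigFileSizeThreshold: 4194304 in application.yml.
@Value("${upload-file.bigFileSizeThreshold}")
private long bigFileSizeThreshold;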
Before going into detail on processing Excel files depending on their size, we should take a look at the getProducts method of the ExcelFileDaoImpl:
@Override
public Flux<Product> getProducts(
final String pathToStorage,
final String fileName,
final String userName
) {
return Flux.defer(() -> {
FileInputStream is;
Workbook workbook;
try {
final File file = Paths.get(pathToStorage, userName, fileName).toFile();
verifyFileAttributes(file);
is = new FileInputStream(file);
workbook = StreamingReader.builder()
.rowCacheSize(ROW_CACHE_SIZE)
.bufferSize(BUFFER_SIZE)
.open(is);
} catch (IOException e) {
return Mono.error(new UploadProductException(
String.format("An exception has occurred while parsing the file: %s", fileName), e));
}
try {
final Sheet datatypeSheet = workbook.getSheetAt(0);
final Iterator<Row> iterator = datatypeSheet.iterator();
final AtomicInteger rowCounter = new AtomicInteger();
if (iterator.hasNext()) {
final Row currentRow = iterator.next();
rowCounter.incrementAndGet();
verifyExcelFileHeader(fileName, currentRow);
}
return Flux.<Product>create(fluxSink -> fluxSink.onRequest(value -> {
try {
for (int i = 0; i < value; i++) {
if (!iterator.hasNext()) {
fluxSink.complete();
return;
}
final Row currentRow = iterator.next();
final Product product = Objects.requireNonNull(getProduct(
FileRow.builder()
.fileName(fileName)
.currentRow(currentRow)
.rowCounter(rowCounter.incrementAndGet())
.build()
), "product is not supposed to be null");
fluxSink.next(product);
}
} catch (Exception e1) {
fluxSink.error(e1);
}
})).doFinally(signalType -> {
try {
is.close();
workbook.close();
} catch (IOException e1) {
log.error("Error has occurred while releasing {} resources: {}", fileName, e1);
}
});
} catch (Exception e) {
return Mono.error(e);
}
});
}
- Defer the whole logic until there is a new subscriber;
- Verify the Excel file header;
- Create a flux that provides the requested number of products;
- Convert each Excel row into a Product domain object;
- Finally, close all of the opened resources.
Getting back to the processing of the Excel files in the UploadProductServiceImpl, we are going to use MongoDB’s bulkWrite method on a collection to update products in bulk, which requires an eagerly evaluated list of UpdateOneModel. In practice, collecting such a list is a memory-consuming operation, especially for big files.
Regarding small Excel files, we provide a more detailed log and do an additional validation check:
private Mono<FileUploadDto> processSmallExcelFile(
final String fileName,
final String userName
) {
log.debug("processSmallExcelFile: {}", fileName);
return excelFileDao.getProducts(pathToStorage, fileName, userName)
.reduce(new ConcurrentHashMap<ProductArticleSizeDto, Tuple2<UpdateOneModel<Document>, BigInteger>>(),
(indexMap, product) -> {
final BigInteger quantity = product.getQuantity();
indexMap.merge(
new ProductArticleSizeDto(product.getArticle(), product.getSize()),
Tuples.of(
updateOneModelConverter.convert(Tuples.of(product, quantity, userName)),
quantity
),
(oldValue, newValue) -> {
final BigInteger mergedQuantity = oldValue.getT2().add(newValue.getT2());
return Tuples.of(
updateOneModelConverter.convert(Tuples.of(product, mergedQuantity, userName)),
mergedQuantity
);
}
);
return indexMap;
})
.filterWhen(productIndexFile ->
productDao.findByArticleIn(extractArticles(productIndexFile.keySet()))
.<ProductArticleSizeDto>handle(
(productArticleSizeDto, synchronousSink) -> {
if (productIndexFile.containsKey(productArticleSizeDto)) {
synchronousSink.next(productArticleSizeDto);
} else {
synchronousSink.error(new UploadProductException(
String.format(
"A file %s does not have an article: %d with size: %s",
fileName,
productArticleSizeDto.getArticle(),
productArticleSizeDto.getSize()
)
));
}
})
.count()
.handle((sizeDb, synchronousSink) -> {
final int sizeFile = productIndexFile.size();
if (sizeDb == sizeFile) {
synchronousSink.next(Boolean.TRUE);
} else {
synchronousSink.error(new UploadProductException(
String.format(
"Inconsistency between total element size in MongoDB: %d and a file %s: %d",
sizeDb,
fileName,
sizeFile
)
));
}
})
).onErrorResume(e -> {
log.debug("Exception while processExcelFile fileName: {}: {}", fileName, e);
return Mono.empty();
}).flatMap(productIndexFile ->
productPatcherService.incrementProductQuantity(
fileName,
productIndexFile.values().stream().map(Tuple2::getT1).collect(Collectors.toList()),
userName
)
).map(bulkWriteResult -> FileUploadDto.builder()
.fileName(fileName)
.matchedCount(bulkWriteResult.getMatchedCount())
.modifiedCount(bulkWriteResult.getModifiedCount())
.build()
);
}
- reduce helps us handle duplicate products whose quantities should be summed up;
- Collect a map keyed by ProductArticleSizeDto whose values are pairs of an UpdateOneModel and the total quantity for a product. The key is used to match an article and its size in the file against those in the database via the ProductArticleSizeDto projection;
- Use the atomic merge method of the ConcurrentMap to sum up the quantities of the same product and create a new UpdateOneModel;
- Filter all products in the file by the product articles that exist in the database;
- Each ProductArticleSizeDto found in the storage matches a ProductArticleSizeDto from the file with its quantities summed up;
- Then count the result of the filtration, which should be equal to the number of distinct products in the file;
- Use the onErrorResume method to continue when any error occurs, because we need to process all files as mentioned in the requirements;
- Extract the list of UpdateOneModel from the map collected earlier to be used further in the incrementProductQuantity method;
- Then run the incrementProductQuantity method as a sub-process within flatMap and map its result into the FileUploadDto that our business users need.
Even though the filterWhen and the subsequent productDao.findByArticleIn allow us to do some additional validation at an early stage, they come at a price, which is especially noticeable while processing big files in practice. However, the incrementProductQuantity method can compare the number of modified documents against the number of distinct products in the file. Knowing that, we can implement a more lightweight option to process big files:
private Mono<FileUploadDto> processBigExcelFile(
final String fileName,
final String userName
) {
log.debug("processBigExcelFile: {}", fileName);
return excelFileDao.getProducts(pathToStorage, fileName, userName)
.reduce(new ConcurrentHashMap<Product, Tuple2<UpdateOneModel<Document>, BigInteger>>(),
(indexMap, product) -> {
final BigInteger quantity = product.getQuantity();
indexMap.merge(
product,
Tuples.of(
updateOneModelConverter.convert(Tuples.of(product, quantity, userName)),
quantity
),
(oldValue, newValue) -> {
final BigInteger mergedQuantity = oldValue.getT2().add(newValue.getT2());
return Tuples.of(
updateOneModelConverter.convert(Tuples.of(product, mergedQuantity, userName)),
mergedQuantity
);
}
);
return indexMap;
})
.map(indexMap -> indexMap.values().stream().map(Tuple2::getT1).collect(Collectors.toList()))
.onErrorResume(e -> {
log.debug("Exception while processExcelFile: {}: {}", fileName, e);
return Mono.empty();
}).flatMap(dtoList ->
productPatcherService.incrementProductQuantity(
fileName,
dtoList,
userName
)
).map(bulkWriteResult -> FileUploadDto.builder()
.fileName(fileName)
.matchedCount(bulkWriteResult.getMatchedCount())
.modifiedCount(bulkWriteResult.getModifiedCount())
.build()
);
}
Here is the ProductAndUserNameToUpdateOneModelConverter that we have used to create an UpdateOneModel:
@Component
public class ProductAndUserNameToUpdateOneModelConverter implements
Converter<Tuple3<Product, BigInteger, String>, UpdateOneModel<Document>> {
@Override
@NonNull
public UpdateOneModel<Document> convert(@NonNull Tuple3<Product, BigInteger, String> source) {
Objects.requireNonNull(source);
final Product product = source.getT1();
final BigInteger quantity = source.getT2();
final String userName = source.getT3();
return new UpdateOneModel<>(
Filters.and(
Filters.eq(Product.SIZE_DB_FIELD, product.getSize().name()),
Filters.eq(Product.ARTICLE_DB_FIELD, product.getArticle())
),
Document.parse(
String.format(
"{ $inc: { %s: %d } }",
Product.QUANTITY_DB_FIELD,
quantity
)
).append(
"$set",
new Document(
Product.LAST_MODIFIED_BY_DB_FIELD,
userName
)
),
new UpdateOptions().upsert(false)
);
}
}
- Firstly, find a document by article and size. Figure 2 shows that we have a compound index on the size and article fields of the product collection to facilitate such a search;
- Increment the quantity of the found document and set the name of the user in the lastModifiedBy field;
- It is also possible to upsert a document here, but we are interested only in the modification of the existing documents in the storage (see the usage sketch below).
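For illustration, here is a hypothetical usage of the converter together with the update it produces (the article, size, quantity and user name values are made up):
// Assuming a product with article 120589 and size M uploaded by "admin":
final UpdateOneModel<Document> model = updateOneModelConverter.convert(
        Tuples.of(product, BigInteger.valueOf(5), "admin"));

// The resulting operation is roughly equivalent to:
// db.product.updateOne(
//   { size: "M", article: 120589 },
//   { $inc: { quantity: 5 }, $set: { lastModifiedBy: "admin" } },
//   { upsert: false }
// )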
Now we are ready to implement the central part of our processing which is the incrementProductQuantity method of the ProductPatcherDaoImpl:
@Override
public Mono<BulkWriteResult> incrementProductQuantity(
final String fileName,
final List<UpdateOneModel<Document>> models,
final String userName
) {
return transactionalOperator.execute(
action -> reactiveMongoOperations.getCollection(Product.COLLECTION_NAME)
.flatMap(collection ->
Mono.from(collection.bulkWrite(models, new BulkWriteOptions().ordered(true)))
).<BulkWriteResult>handle((bulkWriteResult, synchronousSink) -> {
final int fileCount = models.size();
if (Objects.equals(bulkWriteResult.getModifiedCount(), fileCount)) {
synchronousSink.next(bulkWriteResult);
} else {
synchronousSink.error(
new IllegalStateException(
String.format(
"Inconsistency between modified doc count: %d and file doc count: %d. Please, check file: %s",
bulkWriteResult.getModifiedCount(), fileCount, fileName
)
)
);
}
}).onErrorResume(
e -> Mono.fromRunnable(action::setRollbackOnly)
.log("Exception while incrementProductQuantity: " + fileName + ": " + e)
.then(Mono.empty())
)
).singleOrEmpty();
}
- Use the transactionalOperator to roll back a transaction manually. As mentioned before, our goal is to process all files while skipping those causing exceptions;
- Run a single sub-process to bulk-write the modifications to the database sequentially for fail-fast and less resource-intensive behavior. The word "single" is of paramount importance here because it saves us from the dangerous "N+1 query problem", which would spawn a lot of sub-processes on a flux within flatMap;
- Handle the situation when the number of documents processed does not match the number of distinct products in the file;
- The onErrorResume method handles the rollback of the transaction and then returns Mono.empty() to skip the current processing;
- Expect either a single item or an empty Mono as the result of the transactionalOperator.execute method.
One might say: "You called collection.bulkWrite(models, new BulkWriteOptions().ordered(true)), so what about setting a session?" The thing is that the SessionAwareMethodInterceptor of Spring Data MongoDB does it via reflection:
ReflectionUtils.invokeMethod(targetMethod.get(), target,
prependSessionToArguments(session, methodInvocation));
Here is the prependSessionToArguments method:
private static Object[] prependSessionToArguments(ClientSession session, MethodInvocation invocation) {
Object[] args = new Object[invocation.getArguments().length + 1];
args[0] = session;
System.arraycopy(invocation.getArguments(), 0, args, 1, invocation.getArguments().length);
return args;
}
1) Get the arguments of the MethodInvocation;
2) Add the session as the first element of the args array.
In fact, the following method of the MongoCollectionImpl is called:
@Override
public Publisher<BulkWriteResult> bulkWrite(final ClientSession clientSession,
final List<? extends WriteModel<? extends TDocument>> requests,
final BulkWriteOptions options) {
return Publishers.publish(
callback -> wrapped.bulkWrite(clientSession.getWrapped(), requests, options, callback));
}
3. Test implementation
So far so good; now we can create integration tests to cover our logic.
To begin with, we create ProductControllerITTest to test our public API via Spring’s WebTestClient, and we initialize a MongoDB instance to run the tests against:
private static final MongoDBContainer MONGO_DB_CONTAINER =
new MongoDBContainer("mongo:4.2.8");
1) Use a static field to have a single Testcontainers MongoDBContainer shared by all test methods in ProductControllerITTest;
2) We use the 4.2.8 MongoDB container version from Docker Hub as it is the latest stable one at the time of writing; otherwise MongoDBContainer defaults to 4.0.10.
Then in static methods setUpAll and tearDownAll we start and stop the MongoDBContainer respectively:
@BeforeAll
static void setUpAll() {
MONGO_DB_CONTAINER.start();
}
@AfterAll
static void tearDownAll() {
MONGO_DB_CONTAINER.stop();
}
tearDown is responsible for cleaning up the modifications that each test does:
@AfterEach
void tearDown() {
StepVerifier.create(productDao.deleteAll()).verifyComplete();
}
Next we set spring.data.mongodb.uri by executing MONGO_DB_CONTAINER.getReplicaSetUrl() in an ApplicationContextInitializer:
static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
@Override
public void initialize(@NotNull ConfigurableApplicationContext configurableApplicationContext) {
TestPropertyValues.of(
String.format("spring.data.mongodb.uri: %s", MONGO_DB_CONTAINER.getReplicaSetUrl())
).applyTo(configurableApplicationContext);
}
}
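For the Initializer to take effect, the test class has to reference it. Here is a minimal sketch of the assumed wiring; the actual ProductControllerITTest may carry additional annotations:
import org.springframework.boot.test.autoconfigure.web.reactive.AutoConfigureWebTestClient;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ContextConfiguration;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureWebTestClient
@ContextConfiguration(initializers = ProductControllerITTest.Initializer.class)
class ProductControllerITTest {
    // MONGO_DB_CONTAINER, webClient, productDao and the tests below live here.
}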
Now we are ready to write a first test without any transaction collision, because our test files (see Figure 3) have products whose articles do not clash with one another.
Figure 3 Excel files causing no collision in the articles of the products
@WithMockUser(
username = SecurityConfig.ADMIN_NAME,
password = SecurityConfig.ADMIN_PAS,
authorities = SecurityConfig.WRITE_PRIVILEGE
)
@Test
void shouldPatchProductQuantity() {
//GIVEN
insertMockProductsIntoDb(Flux.just(product1, product2, product3));
final BigInteger expected1 = BigInteger.valueOf(16);
final BigInteger expected2 = BigInteger.valueOf(27);
final BigInteger expected3 = BigInteger.valueOf(88);
final String fileName1 = "products1.xlsx";
final String fileName3 = "products3.xlsx";
final String[] fileNames = {fileName1, fileName3};
final FileUploadDto fileUploadDto1 = ProductTestUtil.mockFileUploadDto(fileName1, 2);
final FileUploadDto fileUploadDto3 = ProductTestUtil.mockFileUploadDto(fileName3, 1);
//WHEN
final WebTestClient.ResponseSpec exchange = webClient
.patch()
.uri(BASE_URL)
.contentType(MediaType.MULTIPART_FORM_DATA)
.body(BodyInserters.fromMultipartData(ProductTestUtil.getMultiPartFormData(fileNames)))
.exchange();
//THEN
exchange.expectStatus().isAccepted();
exchange.expectBodyList(FileUploadDto.class)
.hasSize(2)
.contains(fileUploadDto1, fileUploadDto3);
StepVerifier.create(productDao.findAllByOrderByQuantityAsc())
.assertNext(product -> assertEquals(expected1, product.getQuantity()))
.assertNext(product -> assertEquals(expected2, product.getQuantity()))
.assertNext(product -> assertEquals(expected3, product.getQuantity()))
.verifyComplete();
}
Finally, let us test a transaction collision in action, keeping in mind Figure 1 and Figure 4 showing such files:
Figure 4 Excel files causing a collision in the articles of the products
@WithMockUser(
username = SecurityConfig.ADMIN_NAME,
password = SecurityConfig.ADMIN_PAS,
authorities = SecurityConfig.WRITE_PRIVILEGE
)
@Test
void shouldPatchProductQuantityConcurrently() {
//GIVEN
TransactionUtil.setMaxTransactionLockRequestTimeoutMillis(
20,
MONGO_DB_CONTAINER.getReplicaSetUrl()
);
insertMockProductsIntoDb(Flux.just(product1, product2));
final String fileName1 = "products1.xlsx";
final String fileName2 = "products2.xlsx";
final String[] fileNames = {fileName1, fileName2};
final BigInteger expected120589Sum = BigInteger.valueOf(19);
final BigInteger expected120590Sum = BigInteger.valueOf(32);
final BigInteger expected120589T1 = BigInteger.valueOf(16);
final BigInteger expected120589T2 = BigInteger.valueOf(12);
final BigInteger expected120590T1 = BigInteger.valueOf(27);
final BigInteger expected120590T2 = BigInteger.valueOf(11);
final FileUploadDto fileUploadDto1 = ProductTestUtil.mockFileUploadDto(fileName1, 2);
final FileUploadDto fileUploadDto2 = ProductTestUtil.mockFileUploadDto(fileName2, 2);
//WHEN
final WebTestClient.ResponseSpec exchange = webClient
.patch()
.uri(BASE_URL)
.contentType(MediaType.MULTIPART_FORM_DATA)
.accept(MediaType.APPLICATION_STREAM_JSON)
.body(BodyInserters.fromMultipartData(ProductTestUtil.getMultiPartFormData(fileNames)))
.exchange();
//THEN
exchange.expectStatus().isAccepted();
assertThat(
extractBodyArray(exchange),
either(arrayContaining(fileUploadDto1))
.or(arrayContaining(fileUploadDto2))
.or(arrayContainingInAnyOrder(fileUploadDto1, fileUploadDto2))
);
final List<Product> list = productDao.findAll(Sort.by(Sort.Direction.ASC, "article"))
.toStream().collect(Collectors.toList());
assertThat(list.size(), is(2));
assertThat(
list.stream().map(Product::getQuantity).toArray(BigInteger[]::new),
either(arrayContaining(expected120589T1, expected120590T1))
.or(arrayContaining(expected120589T2, expected120590T2))
.or(arrayContaining(expected120589Sum, expected120590Sum))
);
TransactionUtil.setMaxTransactionLockRequestTimeoutMillis(
5,
MONGO_DB_CONTAINER.getReplicaSetUrl()
);
}
- We can specify the maximum amount of time in milliseconds that multi-document transactions should wait to acquire the locks required by the operations in the transaction (by default, multi-document transactions wait 5 milliseconds);
- As an example here, we use a helper method to change 5 ms to 20 ms (see the implementation details below).
Note that the maxTransactionLockRequestTimeoutMillis setting makes no real difference for this particular test case and serves only as an example. After running this test class 120 times via the script ./load_test.sh 120 ProductControllerITTest.shouldPatchProductQuantityConcurrently in the tools directory of the project, I got the following figures:
indicator          | 20 ms, times | 5 ms (default), times
T1 successes       | 61           | 56
T2 successes       | 57           | 63
T1 and T2 success  | 2            | 1
Figure 5 Running the shouldPatchProductQuantityConcurrently test 120 times with 20 and 5 ms maxTransactionLockRequestTimeoutMillis respectively
While going through logs, we may come across something like:
Exception while incrementProductQuantity: products1.xlsx: com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): 'WriteConflict' on server…
Initiating transaction rollback…
Initiating transaction commit…
About to abort transaction for session…
About to commit transaction for session...
Then, let us test the processing of the big file containing 1 million products in a separate PatchProductLoadITTest:
@WithMockUser(
username = SecurityConfig.ADMIN_NAME,
password = SecurityConfig.ADMIN_PAS,
authorities = SecurityConfig.WRITE_PRIVILEGE
)
@Test
void shouldPatchProductQuantityBigFile() {
//GIVEN
unzipClassPathFile("products_1M.zip");
final String fileName = "products_1M.xlsx";
final int count = 1000000;
final long totalQuantity = 500472368779L;
final List<Document> products = getDocuments(count);
TransactionUtil.setTransactionLifetimeLimitSeconds(
900,
MONGO_DB_CONTAINER.getReplicaSetUrl()
);
StepVerifier.create(
reactiveMongoTemplate.remove(new Query(), Product.COLLECTION_NAME)
.then(reactiveMongoTemplate.getCollection(Product.COLLECTION_NAME))
.flatMapMany(c -> c.insertMany(products))
.switchIfEmpty(Mono.error(new RuntimeException("Cannot insertMany")))
.then(getTotalQuantity())
).assertNext(t -> assertEquals(totalQuantity, t)).verifyComplete();
//WHEN
final Instant start = Instant.now();
final WebTestClient.ResponseSpec exchange = webClient
.patch()
.uri(BASE_URL)
.contentType(MediaType.MULTIPART_FORM_DATA)
.accept(MediaType.APPLICATION_STREAM_JSON)
.body(BodyInserters.fromMultipartData(ProductTestUtil.getMultiPartFormData("products_1M.xlsx")))
.exchange();
//THEN
exchange
.expectStatus()
.isAccepted()
.expectBodyList(FileUploadDto.class)
.contains(ProductTestUtil.mockFileUploadDto(fileName, count));
StepVerifier.create(getTotalQuantity())
.assertNext(t -> assertEquals(totalQuantity * 2, t))
.verifyComplete();
log.debug(
"============= shouldPatchProductQuantityBigFile elapsed {}minutes =============",
Duration.between(start, Instant.now()).toMinutes()
);
}
- The general setup is similar to the ProductControllerITTest;
- Unzip a JSON file containing 1 million products, which takes about 254 MB on disk;
- Transactions have a lifetime limit specified by transactionLifetimeLimitSeconds, which is 60 seconds by default. We need to increase it here, because it generally takes more than 60 s to process such a file. For this, we use a helper method to change this lifespan to 900 s (see the implementation details below). For your information, the REST call with the file takes GitHub Actions about 9-12 minutes;
- Before processing, we clean up the product collection, insert 1 million products from the JSON file and then get the total quantity;
- Given that the products in the JSON file and in the big Excel file are the same, we assert that the total quantity after processing doubles.
Such a test requires a relatively big heap of about 4GB (see Figure 6):
Figure 6 VisualVM Monitor Heap while uploading a 1-million-product file
As we can see, it is sensible to configure the maximum amount of disk space allowed for file parts and the maximum number of parts allowed in a given multipart request. That is why I added properties to the corresponding application.yml file and then set them in the configureHttpMessageCodecs method of the implemented WebFluxConfigurer (a configuration sketch follows). However, adding a rate limiter and configuring Schedulers might be a better solution in a production environment. Note that we use Schedulers.boundedElastic() here, which has a pool of 10 * Runtime.getRuntime().availableProcessors() threads by default.
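Here is a minimal sketch of such a configuration, assuming hypothetical property names upload-file.maxParts and upload-file.maxDiskUsagePerPart and the Synchronoss multipart reader shipped with Spring Boot 2.3 (the real class in the project may differ):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.http.codec.multipart.MultipartHttpMessageReader;
import org.springframework.http.codec.multipart.SynchronossPartHttpMessageReader;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
public class UploadLimitsConfig implements WebFluxConfigurer {

    @Value("${upload-file.maxParts}")
    private int maxParts;

    @Value("${upload-file.maxDiskUsagePerPart}")
    private long maxDiskUsagePerPart;

    @Override
    public void configureHttpMessageCodecs(final ServerCodecConfigurer configurer) {
        final SynchronossPartHttpMessageReader partReader = new SynchronossPartHttpMessageReader();
        // Limit the number of parts and the disk space a single part may occupy.
        partReader.setMaxParts(maxParts);
        partReader.setMaxDiskUsagePerPart(maxDiskUsagePerPart);
        configurer.defaultCodecs().multipartReader(new MultipartHttpMessageReader(partReader));
    }
}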
Here is TransactionUtil containing the above-mentioned helper methods:
public class TransactionUtil {
private TransactionUtil() {
}
public static void setTransactionLifetimeLimitSeconds(
final int duration,
final String replicaSetUrl
) {
setMongoParameter("transactionLifetimeLimitSeconds", duration, replicaSetUrl);
}
public static void setMaxTransactionLockRequestTimeoutMillis(
final int duration,
final String replicaSetUrl
) {
setMongoParameter("maxTransactionLockRequestTimeoutMillis", duration, replicaSetUrl);
}
private static void setMongoParameter(
final String param,
final int duration,
final String replicaSetUrl
) {
try (final MongoClient mongoReactiveClient = MongoClients.create(
ConnectionUtil.getMongoClientSettingsWithTimeout(replicaSetUrl)
)) {
StepVerifier.create(mongoReactiveClient.getDatabase("admin").runCommand(
new Document("setParameter", 1).append(param, duration)
)).expectNextCount(1)
.verifyComplete();
}
}
}
4. How can I play with the code?
Small WMS (warehouse management system) on GitHub.
5. What’s in it for me?
- The MongoDBContainer takes care of the complexity of the MongoDB replica set initialization, allowing the developer to focus on testing. Now we can simply make MongoDB transaction testing part of our CI/CD process;
- While processing data, it is sensible to favor MongoDB’s bulk methods, reducing the number of sub-processes within the flatMap method of the Flux and thus avoiding the "N+1 query problem". However, this also comes at a price, because we need to collect a list of UpdateOneModel and keep it in memory, losing some reactive flexibility;
- When it comes to skipping processing, one might employ onErrorResume instead of the dangerous onErrorContinue;
- Even though we are allowed to set maxTransactionLockRequestTimeoutMillis and transactionLifetimeLimitSeconds as start-up parameters to mongod, we may achieve the same effect by calling MongoDB's adminCommand via helper methods;
- Processing big files is resource-consuming and is therefore better off being limited.
6. Want to go deeper?
To construct a multi-node MongoDB replica set for testing complicated failover cases, consider the mongodb-replica-set project.
7. Links
- Reactive Transactions Masterclass by Michael Simons & Mark Paluch
- Spring Data MongoDB — Reference Documentation
- MongoDB Transactions
- MongoDB Collection Methods
Source: habr.com