Journal of Industrial and Management Optimization
January 2019 , Volume 15 , Issue 1
In this paper, we consider a cognitive radio network with multiple secondary users (SUs). The SU packets in the system are divided into two categories, SU1 packets and SU2 packets, where SU1 packets have transmission priority over SU2 packets. Given the absolute priority of the primary users (PUs), PU packets have the highest transmission priority in the system. To guarantee the Quality of Service (QoS) of the network users and reduce the average delay of the SU2 packets, we propose an adjustable access control scheme for the SU2 packets. A newly arriving SU2 packet accesses the system with an access probability related to the total number of packets in the system. A variable factor is also introduced to adjust the access probability dynamically. Based on the working principle of the adjustable access control scheme, we build a discrete-time queueing model with a finite waiting room and an adjustable joining rate. Through a steady-state analysis of the queueing model using a three-dimensional Markov chain, we derive performance measures such as the total channel utilization, the interruption rate, the throughput, and the average delay of the SU2 packets. Moreover, we show the influence of the adjustment factor on different system performance measures through numerical results. Finally, considering the trade-off between the throughput and the average delay of the SU2 packets with respect to the adjustment factor, we build a net benefit function and present an algorithm to optimize the adjustment factor.
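The access-control rule described above can be sketched in a few lines. The abstract does not give the paper's exact probability formula, so the geometric form below, driven by the total packet count `n` and an adjustment factor `a`, is purely an illustrative assumption:

```python
def access_probability(n, a, capacity):
    """Hypothetical access probability for a newly arriving SU2 packet.

    n        -- total number of packets currently in the system
    a        -- adjustment factor in (0, 1]; smaller a throttles access harder
    capacity -- finite waiting-room size; a full system admits nothing

    The paper's exact formula is not stated in the abstract; this sketch
    only assumes the probability decays with n and is tuned by a.
    """
    if n >= capacity:
        return 0.0
    return a ** n
```

An empty system always admits the packet (probability 1), and admission becomes less likely as the system fills, which is the qualitative behavior the scheme aims for.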
This paper considers a multiserver retrial queue with setup time, motivated by applications in data centers with the ON-OFF policy, where an idle server is immediately turned off. The ON-OFF policy is designed to save energy, because an idle server still consumes about 60% of its peak consumption when processing jobs. Upon arrival, a job is allocated to one of the available off-servers, and that server is started up. If no server is available upon arrival, the job is blocked and retries after a random time. A server needs some setup time, during which it cannot process a job but still consumes energy. We formulate this model as a three-dimensional continuous-time Markov chain and obtain the stability condition via Foster-Lyapunov criteria. Interestingly, the stability condition differs from that of the corresponding non-retrial queue. Furthermore, exploiting the special structure of the Markov chain together with a heuristic technique, we develop an efficient algorithm for computing the stationary distribution. Numerical results reveal that under the ON-OFF policy, allowing retrials saves more power than buffering jobs. Furthermore, we obtain a new insight: if the setup time is relatively long, setting an appropriate retrial time can reduce both power consumption and the mean response time of jobs.
This paper considers a discrete-time single-server infinite-capacity queue with two classes of packet arrivals, either delay-sensitive (class 1) or delay-tolerant (class 2), and a reservation-based priority scheduling mechanism. The objective is to provide a better quality of service to delay-sensitive packets at the cost of allowing higher delays for the best-effort packets. To this end, the scheduling mechanism makes use of an in-queue reserved place intended for future class-1 packet arrivals. A class-1 arrival takes the place of the reservation in the queue, after which a new reservation is created at the tail of the queue. Class-2 arrivals always take place at the tail of the queue. We study the delay characteristics for both packet classes under the assumption of a general independent packet arrival process. The service times of the packets are independent and have a general distribution that depends on the class of the packet. Closed-form expressions are obtained for the probability generating functions of the per-class delays. From this, moments and tail probabilities of the packet delays of both classes are derived. The results are illustrated by some numerical examples.
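The single-reservation discipline can be sketched concretely. The list representation and packet names below are illustrative, not the paper's notation:

```python
RESERVATION = object()  # sentinel marking the reserved place in the queue

def arrive(queue, pkt, klass):
    """Insert a packet under the single-reservation discipline.

    A class-1 packet takes the place of the reservation, after which a new
    reservation is created at the tail; a class-2 packet simply joins the
    tail. Illustrative sketch only, assuming a plain list as the queue.
    """
    if klass == 1:
        queue[queue.index(RESERVATION)] = pkt
        queue.append(RESERVATION)
    else:
        queue.append(pkt)

# usage: an empty queue initially holds one reservation (R)
q = [RESERVATION]
arrive(q, "c2-a", 2)  # -> [R, c2-a]
arrive(q, "c1-a", 1)  # -> [c1-a, c2-a, R]
arrive(q, "c2-b", 2)  # -> [c1-a, c2-a, R, c2-b]
arrive(q, "c1-b", 1)  # -> [c1-a, c2-a, c1-b, c2-b, R]
```

Note how the last class-1 arrival (`c1-b`) overtakes the earlier class-2 arrival (`c2-b`): this is exactly the mechanism that trades class-2 delay for class-1 delay.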
We study the convergence of the log-exponential regularization method for mathematical programs with vertical complementarity constraints (MPVCC). Previous work assumes that the sequence of Lagrange multipliers is bounded and can be computed by software. However, in many cases exact KKT points cannot be computed via Matlab subroutines. We note that it is more realistic, from a numerical point of view, to compute inexact KKT points. We prove that, under the MPVCC-MFCQ assumption, any accumulation point of the inexact KKT points is a Clarke (C-) stationary point. The idea of inexact KKT conditions can be used to define stopping criteria for many practical algorithms. Furthermore, we introduce a feasible strategy that guarantees the inexact KKT conditions and provide some numerical examples to certify the reliability of the approach. This shows that the inexact regularization method can be applied to solve MPVCC and demonstrates the advantages of the improvement.
This study addresses an investment problem facing a venture fund manager who has a non-smooth utility function. The theoretical model characterizes an absolute performance-based compensation package. Technically, the research methodology features stochastic control and optimal stopping, formulating a free-boundary problem with a nonlinear equation that is transformed into a new one with a linear equation. Numerical results based on simulations are presented to better illustrate this practical investment decision mechanism.
This paper investigates the design of non-uniform cosine modulated filter banks (CMFBs) with both finite precision and infinite precision coefficients. The finite precision filter bank is designed to reduce the computational complexity of the multiplication operations in the filter bank. Here, a non-uniform filter bank (NUFB) is obtained by merging the appropriate filters of a uniform filter bank. An efficient optimization approach is developed for the design of non-uniform CMFBs with infinite precision coefficients. A new procedure based on the discrete filled function is then developed to design the prototype filter of the filter bank with finite precision coefficients. Design examples demonstrate that the designed filter banks with both infinite precision and finite precision coefficients have low distortion and better performance than other existing methods.
In this paper, we introduce a new methodology for modeling given data and finding the global optimum of the model function. First, a new surface blending technique based on Bézier curves is presented, from which a smooth objective function is obtained. Second, a new global optimization method, together with an adapted algorithm, is presented to reach the global minimizer of the objective function. As an application of this new methodology, we consider the energy conformation problem in physical chemistry, a very important real-world problem.
Project procurement has two important attributes: cost uncertainty and failure risk. Because such attributes cannot be fully specified in a contract, a novel mechanism incorporating contingent payments and cost-sharing contracts is proposed for the buyer. We construct bid-decision models for risk-averse and risk-neutral suppliers, respectively, and derive closed-form solutions for the optimal bid prices. By investigating the properties of bid prices in a first-score sealed-bid reverse auction, we find that when the degree of risk aversion or the variance of the unpredictable cost is sufficiently small, bid prices of risk-averse suppliers can be lower than those of risk-neutral suppliers. Yet risk-averse suppliers always bid higher than risk-neutral suppliers in a second-score sealed-bid reverse auction. An interesting result, verified by numerical experiments, is that the classical revenue equivalence theorem no longer holds for the proposed mechanism when suppliers exhibit risk-averse behavior. In this case, the buyer's best choice is to adopt a first-score sealed-bid reverse auction. We also provide decision support for the buyer to achieve the optimal expected profit.
In this paper, we propose a partial convolution model for image deblurring and denoising. We also devise a new linearized alternating direction method of multipliers (ADMM) with an extension step. Each subproblem admits a closed-form solution, so the per-iteration cost is low; moreover, the relaxed parameter condition, together with the extra extension step inspired by Ye and Yuan's ADMM, enables faster convergence than the original linearized ADMM. Preliminary experimental results show that our algorithm can produce better quality results than some existing efficient algorithms within a similar computation time. The performance advantage of our algorithm is particularly evident at high noise levels.
We propose a technique that introduces the re-sampling step of the particle filter into the particle swarm optimization (PSO) algorithm, a typical global search algorithm. Given a suitable definition of particle weights, the re-sampling step eliminates particles with low weights and duplicates particles with high weights. To prevent particles from becoming identical, the re-sampling step is followed by the existing particle-variation method. This technique greatly enhances the local search capability in the later search stage of the PSO algorithm. More interestingly, it can also be employed to improve another algorithm whose philosophy is "learning from neighbors", namely the neighborhood field optimization (NFO) algorithm. The improved algorithms (PSO-resample and NFO-resample) are compared with other metaheuristic algorithms through extensive simulations. The experiments show that the improved algorithms are superior in terms of convergence rate, search accuracy and robustness. Our results also suggest that the proposed technique is general in the sense that it can probably improve other particle-based intelligent algorithms.
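The borrowed re-sampling step can be sketched as standard multinomial resampling. How the weights are defined for PSO particles is the paper's contribution and is not reproduced here; any non-negative, fitness-based weights are assumed:

```python
import random

def resample(particles, weights, rng=None):
    """Particle-filter style re-sampling step grafted onto a swarm.

    Particles with low weights tend to be eliminated while particles with
    high weights are duplicated; the swarm size is preserved. A variation
    step (not shown) should follow to keep the particles from becoming
    identical, as the paper does.
    """
    rng = rng or random.Random(0)  # fixed seed only for reproducibility
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(particles, weights=probs, k=len(particles))
```

For example, with weights concentrated on one particle, the whole swarm collapses onto it: `resample([1, 2, 3], [0.0, 0.0, 1.0])` returns `[3, 3, 3]`, which is why the subsequent variation step is essential.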
This paper examines the waste of electrical and electronic equipment (WEEE) and draws on variational inequalities to model the closed-loop supply chain (CLSC) network. The network consists of manufacturers, retailers and consumer markets engaging in a Cournot-Nash game. Retailers are responsible for collecting WEEE in the network. It is assumed that the price of the remanufactured goods differs from that of the newly manufactured ones. The network equilibrium occurs when all players agree on volumes and prices. Several properties of the model are examined, and the modified projection method is utilized to obtain the optimal solutions. Numerical examples are provided to illustrate the impact of CLSC parameters on the profits of channel members and consumer benefits, and to provide policy support for governments. We find that it is necessary to regulate a medium collection rate and a certain minimum recovery rate, which also benefits manufacturers in producing newly manufactured products. The impacts of the collection rate and the recovery rate on manufacturers are greater than those on retailers. Consumers benefit from increases in both the recovery rate and the collection rate.
In this paper, the blocking conditions are investigated in permutation flow shop, general flow shop and job shop environments, in which there are no buffer storages between any pair of machines. Based on an alternative graph, an extension of the classical disjunctive graph, a new and generic polynomial-time algorithm is proposed to construct a feasible schedule from a given job processing sequence, in particular one satisfying the complex blocking constraints of multi-stage scheduling environments. To assess the proposed algorithm against the state of the art, a comparative analysis is conducted against two other constructive algorithms from the literature, and the comparison highlights the advantages of the proposed algorithm.
In this paper, we adapt a nature-inspired optimization approach, the water flow algorithm, to the quadratic assignment problem. The algorithm imitates the hydrological cycle in meteorology and the erosion phenomenon in nature. A systematic precipitation generating scheme is included to spread the raindrop positions on the ground more widely, thereby increasing the solution exploration capability of the algorithm. Efficient local search methods are also used to enhance the solution exploitation capability of the algorithm. In addition, a parallel computing strategy is integrated into the algorithm to reduce the computation time. The performance of the algorithm is tested on the benchmark instances of the quadratic assignment problem taken from the QAPLIB. The computational results and comparisons show that our algorithm is able to obtain good quality or optimal solutions to the benchmark instances within reasonable computation time.
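The objective any such QAP solver minimizes is simple to state in code. The toy instance below is hypothetical, not one of the QAPLIB benchmarks used in the paper:

```python
def qap_cost(perm, flow, dist):
    """Quadratic assignment objective: facility perm[i] is placed at
    location i, and the cost sums flow between every facility pair times
    the distance between their assigned locations."""
    n = len(perm)
    return sum(flow[perm[i]][perm[j]] * dist[i][j]
               for i in range(n) for j in range(n))

# toy 3x3 instance: facilities 0 and 1 interact heavily (flow 2),
# locations 0 and 2 are far apart (distance 5)
flow = [[0, 2, 0],
        [2, 0, 1],
        [0, 1, 0]]
dist = [[0, 1, 5],
        [1, 0, 1],
        [5, 1, 0]]
```

Placing the heavily interacting facilities at nearby locations, as in `qap_cost([0, 1, 2], flow, dist)`, gives cost 6, while the swap `[1, 0, 2]` pushes facility 1's partner far away and raises the cost to 14; the algorithm searches the space of permutations for the cheapest such assignment.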
In this work we develop PDE-based mathematical models for valuing real options on investment project expansions when the underlying commodity price follows a geometric Brownian motion. The models developed are of a similar form to the Black-Scholes model for pricing conventional European call options. However, unlike the Black-Scholes model, the payoff conditions of the current models are determined by a PDE system. An upwind finite difference scheme is used for solving the models. Numerical experiments have been performed using two examples of pricing project expansion options in the mining industry, demonstrating that our models produce financially meaningful numerical results for the two non-trivial test problems.
Dynamic Data Envelopment Analysis (DDEA) deals with efficiency analysis of decision making units in time-dependent situations. A finite number of time periods and some carry-over activities between each pair of consecutive periods are assumed in DDEA. There are many models in DEA for efficiency evaluation of decision making units over time periods. One important class of dynamic models is the class of slacks-based models. Using a numerical example, we show that some slacks-based DDEA models, especially the ones proposed by Tone and Tsutsui, suffer from efficiency overestimation. A new dynamic slacks-based DEA model is proposed to overcome the deficiencies of the available slacks-based models. The model proposed in this paper is capable of revealing all sources of inefficiency and providing more discrimination between decision making units. The theoretical and practical examinations demonstrate the merits of the new model.
In this paper, we consider the valuation of vulnerable options under a Markov-modulated jump-diffusion model, where the option writer's asset value is subject to price pressure from other financial institutions due to distressed selling. A change of numéraire technique proposed by Geman et al. is employed in the valuation.
In this paper, we present a modified extragradient-type method for solving the variational inequality problem involving a uniformly continuous pseudomonotone operator. It is shown that under certain mild assumptions, this method is strongly convergent in infinite-dimensional real Hilbert spaces. We give some numerical experiments that compare our proposed method with other existing methods in a model of industrial electricity production.
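For orientation, the classical (Korpelevich) extragradient iteration that such methods modify looks as follows; the paper's own modifications for uniformly continuous pseudomonotone operators are not reproduced here, and the toy operator below is an assumption for illustration:

```python
import numpy as np

def extragradient(F, project, x0, tau=0.1, iters=500):
    """Textbook extragradient iteration for VI(F, C): find x* in C with
    <F(x*), y - x*> >= 0 for all y in C, where project(.) is the metric
    projection onto C and tau is the step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))  # predictor: step using F at current point
        x = project(x - tau * F(y))  # corrector: re-step using F at predictor
    return x

# toy monotone operator F(x) = x - 1 on C = [0, 2]; the solution is x* = 1
sol = extragradient(lambda x: x - 1.0,
                    lambda z: np.clip(z, 0.0, 2.0),
                    [2.0])
```

The double evaluation of `F` per iteration is what distinguishes the extragradient scheme from the plain projected-gradient method and is the price paid for convergence under weaker monotonicity assumptions.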
The paper considers the pricing problem of complementary products in a fuzzy dual-channel supply chain environment with two manufacturers and one retailer. Four decision models are established to study this problem: the centralized decision model, the MS-Bertrand model, the RS-Bertrand model and the Nash game model, where the consumer demand and manufacturing cost for each product are characterized as fuzzy variables. A closed-form solution is obtained for each model using game theory and fuzzy theory. Numerical examples are presented to compare the maximal expected profits and optimal pricing decisions, and to provide additional managerial insights. The findings show that decision makers are more likely to cooperate in industries with higher self-price elasticity coefficients and lower complementarity in the retail channel. Consumers can benefit from the cooperation of the two manufacturers through lower prices. Cooperation may also not be bad for the retailer, since it can expand demand and yield a higher maximal expected profit.
In the Bitcoin system, transactions are prioritized according to transaction fees. Transactions without fees are given low priority and are likely to wait longer for confirmation. Because demand for micropayments in Bitcoin is expected to increase due to its low remittance cost, it is important to quantitatively investigate how transactions with small fees affect the transaction-confirmation time. In this paper, we analyze the transaction-confirmation time by queueing theory. We model the transaction-confirmation process of Bitcoin as a priority queueing system with batch service and derive the mean transaction-confirmation time. Numerical examples show how the demand for transactions with low fees affects the transaction-confirmation time. We also consider the effect of the maximum block size on the transaction-confirmation time.
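The batch-service step being modeled can be sketched as one round of block formation. The transaction records and field names below are illustrative, not Bitcoin's actual data format:

```python
def select_block(mempool, block_size):
    """One batch-service step of the confirmation process: the miner fills
    the next block with up to block_size transactions, highest fee first,
    so low-fee transactions are left waiting for later blocks."""
    ordered = sorted(mempool, key=lambda tx: tx["fee"], reverse=True)
    return ordered[:block_size], ordered[block_size:]

# usage: four pending transactions, a block that holds only two of them
mempool = [{"id": "t1", "fee": 0}, {"id": "t2", "fee": 5},
           {"id": "t3", "fee": 2}, {"id": "t4", "fee": 7}]
confirmed, waiting = select_block(mempool, block_size=2)
# confirmed: t4 and t2 (highest fees); waiting: t3 and t1
```

Repeating this step block after block is exactly why a zero-fee transaction such as `t1` can wait through many service batches, which is the delay effect the queueing analysis quantifies.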
This paper studies a multi-period portfolio selection problem for retirees during the decumulation phase. We set a series of investment targets over time and aim to minimize the expected losses from the time of retirement to the time of compulsory annuitization by using a quadratic loss function. A target greater than the expected wealth is given and the corresponding explicit expressions for the optimal investment strategy are obtained. In addition, the withdrawal amount for daily life is assumed to be a linear function of the wealth level. Then according to the parameter value settings in the linear function, the withdrawal mechanism is classified as deterministic withdrawal, proportional withdrawal or combined withdrawal. The properties of the investment strategies, targets, bankruptcy probabilities and accumulated withdrawal amounts are compared under the three withdrawal mechanisms. Finally, numerical illustrations are presented to analyze the effects of the final target and the interest rate on some obtained results.
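The abstract does not state the objective explicitly; a standard multi-period quadratic-loss formulation consistent with its description, with hypothetical targets $D_t$, wealth $X_t$, controls $u_t$ and withdrawals $W_t$, reads:

```latex
\min_{\{u_t\}} \; \mathbb{E}\!\left[\sum_{t=0}^{T} \bigl(D_t - X_t\bigr)^2\right],
\qquad D_t > \mathbb{E}[X_t],
\qquad W_t = a + b\,X_t ,
```

where the linear withdrawal rule covers the three mechanisms compared in the paper: $b = 0$ gives deterministic withdrawal, $a = 0$ gives proportional withdrawal, and $a, b > 0$ gives the combined case.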